

2018


Real-time Perception meets Reactive Motion Generation

(Best Systems Paper Finalists - Amazon Robotics Best Paper Awards in Manipulation)

Kappler, D., Meier, F., Issac, J., Mainprice, J., Garcia Cifuentes, C., Wüthrich, M., Berenz, V., Schaal, S., Ratliff, N., Bohg, J.

IEEE Robotics and Automation Letters, 3(3):1864-1871, July 2018 (article)

Abstract
We address the challenging problem of robotic grasping and manipulation in the presence of uncertainty. This uncertainty is due to noisy sensing, inaccurate models and hard-to-predict environment dynamics. Our approach emphasizes the importance of continuous, real-time perception and its tight integration with reactive motion generation methods. We present a fully integrated system where real-time object and robot tracking as well as ambient world modeling provides the necessary input to feedback controllers and continuous motion optimizers. Specifically, they provide attractive and repulsive potentials based on which the controllers and motion optimizer can online compute movement policies at different time intervals. We extensively evaluate the proposed system on a real robotic platform in four scenarios that exhibit either challenging workspace geometry or a dynamic environment. We compare the proposed integrated system with a more traditional sense-plan-act approach that is still widely used. In 333 experiments, we show the robustness and accuracy of the proposed system.
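The controllers mentioned above compute movement commands online from attractive and repulsive potentials. The sketch below illustrates only that generic potential-field idea; it is not the paper's integrated system, and all names and gains (goal, obstacles, k_att, k_rep, d0) are illustrative assumptions.

import numpy as np

def potential_field_velocity(x, goal, obstacles, k_att=1.0, k_rep=0.5, d0=0.3):
    # Attractive term: gradient descent on 0.5 * k_att * ||x - goal||^2
    v = -k_att * (x - goal)
    # Repulsive terms: push away from obstacles inside the influence radius d0
    for obs in obstacles:
        diff = x - obs
        d = np.linalg.norm(diff)
        if 1e-9 < d < d0:
            v += k_rep * (1.0 / d - 1.0 / d0) * diff / d**3
    return v

# Recompute the command every control cycle from the latest tracked robot/object state.
x = np.array([0.0, 0.0, 0.5])
cmd = potential_field_velocity(x,
                               goal=np.array([0.4, 0.2, 0.3]),
                               obstacles=[np.array([0.2, 0.1, 0.4])])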

am

arxiv video video link (url) DOI Project Page [BibTex]


Differentially Private Database Release via Kernel Mean Embeddings

Balog, M., Tolstikhin, I., Schölkopf, B.

Proceedings of the 35th International Conference on Machine Learning (ICML), 80, pages: 423-431, Proceedings of Machine Learning Research, (Editors: Dy, Jennifer and Krause, Andreas), PMLR, July 2018 (conference)

ei

link (url) [BibTex]


On Matching Pursuit and Coordinate Descent

Locatello, F., Raj, A., Praneeth Karimireddy, S., Rätsch, G., Schölkopf, B., Stich, S. U., Jaggi, M.

Proceedings of the 35th International Conference on Machine Learning (ICML), 80, pages: 3204-3213, Proceedings of Machine Learning Research, (Editors: Dy, Jennifer and Krause, Andreas), PMLR, July 2018 (conference)

ei

link (url) [BibTex]


Iterative Model-Fitting and Local Controller Optimization - Towards a Better Understanding of Convergence Properties

Wüthrich, M., Schölkopf, B.

Workshop on Prediction and Generative Modeling in Reinforcement Learning at ICML, July 2018 (conference)

ei

PDF link (url) [BibTex]


Counterfactual Mean Embedding: A Kernel Method for Nonparametric Causal Inference

Muandet, K., Kanagawa, M., Saengkyongam, S., Marukata, S.

Workshop on Machine Learning for Causal Inference, Counterfactual Prediction, and Autonomous Action (CausalML) at ICML, July 2018 (conference)

ei

[BibTex]


Innate turning preference of leaf-cutting ants in the absence of external orientation cues

Endlein, T., Sitti, M.

Journal of Experimental Biology, The Company of Biologists Ltd, June 2018 (article)

Abstract
Many ants use a combination of cues for orientation, but how do ants find their way when all external cues are suppressed? Do they walk in a random way or are their movements spatially oriented? Here we show for the first time that leaf-cutting ants (Acromyrmex lundii) have an innate preference for turning counter-clockwise (left) when external cues are precluded. We demonstrated this by allowing individual ants to run freely on the water surface of a newly-developed treadmill. The surface tension supported medium-sized workers but effectively prevented ants from reaching the wall of the vessel, which was important to avoid wall-following behaviour (thigmotaxis). Most ants ran for minutes on the spot but also slowly turned counter-clockwise in the absence of visual cues. Reconstructing the effectively walked path revealed a looping pattern which could be interpreted as a search strategy. A similar turning bias was shown for groups of ants in a symmetrical Y-maze, where twice as many ants chose the left branch in the absence of optical cues. Wall-following behaviour was tested by inserting a coiled tube before the Y-fork. When ants traversed a left-coiled tube, more ants chose the left box and vice versa. Adding visual cues in the form of vertical black strips either outside the treadmill or on one branch of the Y-maze led to oriented walks towards the strips. It is suggested that both the turning bias and the wall-following behaviour are employed as search strategies for an unknown environment and that both can be overridden by visual cues.

pi

link (url) DOI [BibTex]


Motility and chemotaxis of bacteria-driven microswimmers fabricated using antigen 43-mediated biotin display

Schauer, O., Mostaghaci, B., Colin, R., Hürtgen, D., Kraus, D., Sitti, M., Sourjik, V.

Scientific Reports, 8(1):9801, Nature Publishing Group, June 2018 (article)

Abstract
Bacteria-driven biohybrid microswimmers (bacteriabots) combine synthetic cargo with motile living bacteria that enable propulsion and steering. Although fabrication and potential use of such bacteriabots have attracted much attention, existing methods of fabrication require an extensive sample preparation that can drastically decrease the viability and motility of bacteria. Moreover, chemotactic behavior of bacteriabots in a liquid medium with chemical gradients has remained largely unclear. To overcome these shortcomings, we designed Escherichia coli to autonomously display biotin on its cell surface via the engineered autotransporter antigen 43 and thus to bind streptavidin-coated cargo. We show that the cargo attachment to these bacteria is greatly enhanced by motility and occurs predominantly at the cell poles, which is greatly beneficial for the fabrication of motile bacteriabots. We further performed a systematic study to understand and optimize the ability of these bacteriabots to follow chemical gradients. We demonstrate that the chemotaxis of bacteriabots is primarily limited by the cargo-dependent reduction of swimming speed and show that the fabrication of bacteriabots using elongated E. coli cells can be used to overcome this limitation.

pi

link (url) DOI [BibTex]


Event-triggered Learning for Resource-efficient Networked Control

Solowjow, F., Baumann, D., Garcke, J., Trimpe, S.

In Proceedings of the 2018 American Control Conference (ACC), pages: 6506 - 6512, American Control Conference, June 2018 (inproceedings)

ics

arXiv PDF DOI Project Page [BibTex]


Multifunctional ferrofluid-infused surfaces with reconfigurable multiscale topography

Wang, W., Timonen, J. V. I., Carlson, A., Drotlef, D., Zhang, C. T., Kolle, S., Grinthal, A., Wong, T., Hatton, B., Kang, S. H., Kennedy, S., Chi, J., Blough, R. T., Sitti, M., Mahadevan, L., Aizenberg, J.

Nature, June 2018 (article)

Abstract
Developing adaptive materials with geometries that change in response to external stimuli provides fundamental insights into the links between the physical forces involved and the resultant morphologies and creates a foundation for technologically relevant dynamic systems [1,2]. In particular, reconfigurable surface topography as a means to control interfacial properties [3] has recently been explored using responsive gels [4], shape-memory polymers [5], liquid crystals [6-8] and hybrid composites [9-14], including magnetically active slippery surfaces [12-14]. However, these designs exhibit a limited range of topographical changes and thus a restricted scope of function. Here we introduce a hierarchical magneto-responsive composite surface, made by infiltrating a ferrofluid into a microstructured matrix (termed ferrofluid-containing liquid-infused porous surfaces, or FLIPS). We demonstrate various topographical reconfigurations at multiple length scales and a broad range of associated emergent behaviours. An applied magnetic-field gradient induces the movement of magnetic nanoparticles suspended in the ferrofluid, which leads to microscale flow of the ferrofluid first above and then within the microstructured surface. This redistribution changes the initially smooth surface of the ferrofluid (which is immobilized by the porous matrix through capillary forces) into various multiscale hierarchical topographies shaped by the size, arrangement and orientation of the confining microstructures in the magnetic field. We analyse the spatial and temporal dynamics of these reconfigurations theoretically and experimentally as a function of the balance between capillary and magnetic pressures [15-19] and of the geometric anisotropy of the FLIPS system. Several interesting functions at three different length scales are demonstrated: self-assembly of colloidal particles at the micrometre scale; regulated flow of liquid droplets at the millimetre scale; and switchable adhesion and friction, liquid pumping and removal of biofilms at the centimetre scale. We envision that FLIPS could be used as part of integrated control systems for the manipulation and transport of matter, thermal management, microfluidics and fouling-release materials.

pi

link (url) DOI [BibTex]


Reducing 3D Vibrations to 1D in Real Time

Park, G., Kuchenbecker, K. J.

Pisa, Italy, June 2018, Hands-on demonstration presented at EuroHaptics (misc)

Abstract
In this demonstration, you will hold two pen-shaped modules: an in-pen and an out-pen. The in-pen is instrumented with a high-bandwidth three-axis accelerometer, and the out-pen contains a one-axis voice coil actuator. Use the in-pen to interact with different surfaces; the measured 3D accelerations are continually converted into 1D vibrations and rendered with the out-pen for you to feel. You can test conversion methods that range from simply selecting a single axis to applying a discrete Fourier transform or principal component analysis for realistic and brisk real-time conversion.
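One plausible way to realize the principal-component-analysis conversion mentioned above is sketched below; this is a minimal offline illustration (buffer size, sampling rate, and function names are assumptions), not the demonstrated real-time implementation.

import numpy as np

def pca_3d_to_1d(accel_xyz):
    # accel_xyz: (N, 3) buffer of three-axis acceleration samples.
    # Project onto the direction of largest variance to obtain one 1D vibration signal.
    centered = accel_xyz - accel_xyz.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[0]

# Example: a fake 0.1 s buffer sampled at 10 kHz; the result would drive the voice-coil actuator.
buffer_3d = np.random.randn(1000, 3)
signal_1d = pca_3d_to_1d(buffer_3d)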

hi

[BibTex]


Conditional Affordance Learning for Driving in Urban Environments

Conference on Robot Learning (CoRL), 2018 (conference)

avg

pdf suppmat Video [BibTex]


Infinite Factorial Finite State Machine for Blind Multiuser Channel Estimation

Ruiz, F. J. R., Valera, I., Svensson, L., Perez-Cruz, F.

IEEE Transactions on Cognitive Communications and Networking, 4(2):177-191, June 2018 (article)

ei

DOI [BibTex]


Frame-Recurrent Video Super-Resolution

Sajjadi, M. S. M., Vemulapalli, R., Brown, M.

IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018 (conference)

ei

ArXiv link (url) [BibTex]


Oncilla robot: a versatile open-source quadruped research robot with compliant pantograph legs

Spröwitz, A., Tuleu, A., Ajallooeian, M., Vespignani, M., Moeckel, R., Eckert, P., D’Haene, M., Degrave, J., Nordmann, A., Schrauwen, B., Steil, J., Ijspeert, A. J.

Frontiers in Robotics and AI, 5(67), June 2018, arXiv: 1803.06259 (article)

Abstract
We present Oncilla robot, a novel mobile, quadruped legged locomotion machine. This large-cat-sized, 5.1 kg robot is one of a kind of a recent, bioinspired legged robot class designed with the capability of model-free locomotion control. Animal legged locomotion in rough terrain is clearly shaped by sensor feedback systems. Results with Oncilla robot show that agile and versatile locomotion is possible without sensory signals to some extent, and tracking becomes robust when feedback control is added (Ajallooeian 2015). By incorporating mechanical and control blueprints inspired from animals, and by observing the resulting robot locomotion characteristics, we aim to understand the contribution of individual components. Legged robots have a wide mechanical and control design parameter space, and a unique potential as research tools to investigate principles of biomechanics and legged locomotion control. But the hardware and controller design can be a steep initial hurdle for academic research. To facilitate the easy start and development of legged robots, Oncilla-robot's blueprints are available through open-source. [...]

dlg

link (url) DOI Project Page [BibTex]


Haptipedia: Exploring Haptic Device Design Through Interactive Visualizations

Seifi, H., Fazlollahi, F., Park, G., Kuchenbecker, K. J., MacLean, K. E.

Hands-on demonstration presented at the EuroHaptics Conference, June 2018 (misc)

Abstract
How many haptic devices have been proposed in the last 30 years? How can we leverage this rich source of design knowledge to inspire future innovations? Our goal is to make historical haptic invention accessible through interactive visualization of a comprehensive library – a Haptipedia – of devices that have been annotated with designer-relevant metadata. In this demonstration, participants can explore Haptipedia’s growing library of grounded force feedback devices through several prototype visualizations, interact with 3D simulations of the device mechanisms and movements, and tell us about the attributes and devices that could make Haptipedia a useful resource for the haptic design community.

hi

Project Page [BibTex]


Learning Face Deblurring Fast and Wide

Jin, M., Hirsch, M., Favaro, P.

The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, pages: 745-753, June 2018 (conference)

ei

link (url) [BibTex]


Designing a Haptic Empathetic Robot Animal for Children with Autism

Burns, R., Kuchenbecker, K. J.

Workshop paper (4 pages) at the RSS Workshop on Robot-Mediated Autism Intervention: Hardware, Software and Curriculum, June 2018 (misc)

Abstract
Children with autism often endure sensory overload, may be nonverbal, and have difficulty understanding and relaying emotions. These experiences result in heightened stress during social interaction. Animal-assisted intervention has been found to improve the behavior of children with autism during social interaction, but live animal companions are not always feasible. We are thus in the process of designing a robotic animal to mimic some successful characteristics of animal-assisted intervention while trying to improve on others. The over-arching hypothesis of this research is that an appropriately designed robot animal can reduce stress in children with autism and empower them to engage in social interaction.

hi

link (url) Project Page [BibTex]


Soft Multi-Axis Boundary-Electrode Tactile Sensors for Whole-Body Robotic Skin

Lee, H., Kim, J., Kuchenbecker, K. J.

Workshop paper (2 pages) at the RSS Pioneers Workshop, June 2018 (misc)

hi

Project Page [BibTex]


Learning from Outside the Viability Kernel: Why we Should Build Robots that can Fail with Grace

Heim, S., Spröwitz, A.

Proceedings of SIMPAR 2018, pages: 55-61, IEEE, 2018 IEEE International Conference on Simulation, Modeling, and Programming for Autonomous Robots (SIMPAR), May 2018 (conference)

dlg

link (url) DOI Project Page [BibTex]


Self-Sensing Paper Actuators Based on Graphite–Carbon Nanotube Hybrid Films

Amjadi, M., Sitti, M.

Advanced Science, pages: 1800239, May 2018 (article)

Abstract
Soft actuators have demonstrated potential in a range of applications, including soft robotics, artificial muscles, and biomimetic devices. However, the majority of current soft actuators suffer from the lack of real-time sensory feedback, prohibiting their effective sensing and multitask function. Here, a promising strategy is reported to design bilayer electrothermal actuators capable of simultaneous actuation and sensation (i.e., self-sensing actuators), merely through two input electric terminals. Decoupled electrothermal stimulation and strain sensation is achieved by the optimal combination of graphite microparticles and carbon nanotubes (CNTs) in the form of hybrid films. By finely tuning the charge transport properties of hybrid films, the signal-to-noise ratio (SNR) of self-sensing actuators is remarkably enhanced to over 66. As a result, self-sensing actuators can actively track their displacement and distinguish the touch of soft and hard objects.

pi

link (url) DOI Project Page [BibTex]


Bioinspired microrobots

Palagi, S., Fischer, P.

Nature Reviews Materials, 3, pages: 113–124, May 2018 (article)

Abstract
Microorganisms can move in complex media, respond to the environment and self-organize. The field of microrobotics strives to achieve these functions in mobile robotic systems of sub-millimetre size. However, miniaturization of traditional robots and their control systems to the microscale is not a viable approach. A promising alternative strategy in developing microrobots is to implement sensing, actuation and control directly in the materials, thereby mimicking biological matter. In this Review, we discuss design principles and materials for the implementation of robotic functionalities in microrobots. We examine different biological locomotion strategies, and we discuss how they can be artificially recreated in magnetic microrobots and how soft materials improve control and performance. We show that smart, stimuli-responsive materials can act as on-board sensors and actuators and that ‘active matter’ enables autonomous motion, navigation and collective behaviours. Finally, we provide a critical outlook for the field of microrobotics and highlight the challenges that need to be overcome to realize sophisticated microrobots, which one day might rival biological machines.

pf

link (url) DOI [BibTex]


Wasserstein Auto-Encoders

Tolstikhin, I., Bousquet, O., Gelly, S., Schölkopf, B.

6th International Conference on Learning Representations (ICLR), May 2018 (conference)

ei

link (url) [BibTex]


Adversarial Collaboration: Joint Unsupervised Learning of Depth, Camera Motion, Optical Flow and Motion Segmentation

Ranjan, A., Jampani, V., Kim, K., Sun, D., Wulff, J., Black, M. J.

May 2018 (article)

Abstract
We address the unsupervised learning of several interconnected problems in low-level vision: single view depth prediction, camera motion estimation, optical flow and segmentation of a video into the static scene and moving regions. Our key insight is that these four fundamental vision problems are coupled and, consequently, learning to solve them together simplifies the problem because the solutions can reinforce each other by exploiting known geometric constraints. In order to model geometric constraints, we introduce Adversarial Collaboration, a framework that facilitates competition and collaboration between neural networks. We go beyond previous work by exploiting geometry more explicitly and segmenting the scene into static and moving regions. Adversarial Collaboration works much like expectation-maximization but with neural networks that act as adversaries, competing to explain pixels that correspond to static or moving regions, and as collaborators through a moderator that assigns pixels to be either static or independently moving. Our novel method integrates all these problems in a common framework and simultaneously reasons about the segmentation of the scene into moving objects and the static background, the camera motion, depth of the static scene structure, and the optical flow of moving objects. Our model is trained without any supervision and achieves state of the art results amongst unsupervised methods.

ps

pdf link (url) [BibTex]


Fidelity-Weighted Learning

Dehghani, M., Mehrjou, A., Gouws, S., Kamps, J., Schölkopf, B.

6th International Conference on Learning Representations (ICLR), May 2018 (conference)

ei

link (url) [BibTex]


Inducing Probabilistic Context-Free Grammars for the Sequencing of Movement Primitives

Lioutikov, R., Maeda, G., Veiga, F., Kersting, K., Peters, J.

IEEE International Conference on Robotics and Automation, (ICRA), pages: 1-8, IEEE, May 2018 (conference)

ei

DOI [BibTex]


Sobolev GAN

Mroueh, Y., Li*, C., Sercu*, T., Raj*, A., Cheng, Y.

6th International Conference on Learning Representations (ICLR), May 2018, *equal contribution (conference)

ei

link (url) [BibTex]


Assisting Movement Training and Execution With Visual and Haptic Feedback

Ewerton, M., Rother, D., Weimar, J., Kollegger, G., Wiemeyer, J., Peters, J., Maeda, G.

Frontiers in Neurorobotics, 12, May 2018 (article)

ei

DOI [BibTex]


Soft Miniaturized Linear Actuators Wirelessly Powered by Rotating Permanent Magnets

Qiu, T., Palagi, S., Sachs, J., Fischer, P.

In 2018 IEEE International Conference on Robotics and Automation (ICRA), pages: 3595-3600, May 2018 (inproceedings)

Abstract
Wireless actuation by magnetic fields allows for the operation of untethered miniaturized devices, e.g. in biomedical applications. Nevertheless, generating large controlled forces over relatively large distances is challenging. Magnetic torques are easier to generate and control, but they are not always suitable for the tasks at hand. Moreover, strong magnetic fields are required to generate a sufficient torque, which are difficult to achieve with electromagnets. Here, we demonstrate a soft miniaturized actuator that transforms an externally applied magnetic torque into a controlled linear force. We report the design, fabrication and characterization of both the actuator and the magnetic field generator. We show that the magnet assembly, which is based on a set of rotating permanent magnets, can generate strong controlled oscillating fields over a relatively large workspace. The actuator, which is 3D-printed, can lift a load of more than 40 times its weight. Finally, we show that the actuator can be further miniaturized, paving the way towards strong, wirelessly powered microactuators.

pf

link (url) DOI [BibTex]


Temporal Difference Models: Model-Free Deep RL for Model-Based Control

Pong*, V., Gu*, S., Dalal, M., Levine, S.

6th International Conference on Learning Representations (ICLR), May 2018, *equal contribution (conference)

ei

link (url) [BibTex]


Robust Dense Mapping for Large-Scale Dynamic Environments

Barsan, I. A., Liu, P., Pollefeys, M., Geiger, A.

In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA) 2018, IEEE, International Conference on Robotics and Automation, May 2018 (inproceedings)

Abstract
We present a stereo-based dense mapping algorithm for large-scale dynamic urban environments. In contrast to other existing methods, we simultaneously reconstruct the static background, the moving objects, and the potentially moving but currently stationary objects separately, which is desirable for high-level mobile robotic tasks such as path planning in crowded environments. We use both instance-aware semantic segmentation and sparse scene flow to classify objects as either background, moving, or potentially moving, thereby ensuring that the system is able to model objects with the potential to transition from static to dynamic, such as parked cars. Given camera poses estimated from visual odometry, both the background and the (potentially) moving objects are reconstructed separately by fusing the depth maps computed from the stereo input. In addition to visual odometry, sparse scene flow is also used to estimate the 3D motions of the detected moving objects, in order to reconstruct them accurately. A map pruning technique is further developed to improve reconstruction accuracy and reduce memory consumption, leading to increased scalability. We evaluate our system thoroughly on the well-known KITTI dataset. Our system is capable of running on a PC at approximately 2.5Hz, with the primary bottleneck being the instance-aware semantic segmentation, which is a limitation we hope to address in future work.

avg

pdf Video Project Page [BibTex]


Wasserstein Auto-Encoders: Latent Dimensionality and Random Encoders

Rubenstein, P. K., Schölkopf, B., Tolstikhin, I.

Workshop at the 6th International Conference on Learning Representations (ICLR), May 2018 (conference)

ei

link (url) [BibTex]


Impact of Trunk Orientation for Dynamic Bipedal Locomotion

Drama, O.

Dynamic Walking Conference, May 2018 (talk)

Abstract
My research revolves around investigating the functional demands of bipedal running, with focus on stabilizing trunk orientation. When we think about postural stability, there are two critical questions we need to answer: What are the necessary and sufficient conditions to achieve and maintain trunk stability? I am concentrating on how morphology affects control strategies in achieving trunk stability. In particular, I denote the trunk pitch as the predominant morphology parameter and explore the requirements it imposes on a chosen control strategy. To analyze this, I use a spring loaded inverted pendulum model extended with a rigid trunk, which is actuated by a hip motor. The challenge for the controller design here is to have a single hip actuator to achieve two coupled tasks of moving the legs to generate motion and stabilizing the trunk. I enforce orthograde and pronograde postures and aim to identify the effect of these trunk orientations on the hip torque and ground reaction profiles for different control strategies.

dlg

Impact of trunk orientation for dynamic bipedal locomotion [DW 2018] link (url) Project Page [BibTex]


Leave no Trace: Learning to Reset for Safe and Autonomous Reinforcement Learning

Eysenbach, B., Gu, S., Ibarz, J., Levine, S.

6th International Conference on Learning Representations (ICLR), May 2018 (conference)

ei

Videos link (url) [BibTex]


Online Learning of a Memory for Learning Rates

(nominated for best paper award)

Meier, F., Kappler, D., Schaal, S.

In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA) 2018, IEEE, International Conference on Robotics and Automation, May 2018, accepted (inproceedings)

Abstract
The promise of learning to learn for robotics rests on the hope that by extracting some information about the learning process itself we can speed up subsequent similar learning tasks. Here, we introduce a computationally efficient online meta-learning algorithm that builds and optimizes a memory model of the optimal learning rate landscape from previously observed gradient behaviors. While performing task specific optimization, this memory of learning rates predicts how to scale currently observed gradients. After applying the gradient scaling our meta-learner updates its internal memory based on the observed effect its prediction had. Our meta-learner can be combined with any gradient-based optimizer, learns on the fly and can be transferred to new optimization tasks. In our evaluations we show that our meta-learning algorithm speeds up learning of MNIST classification and a variety of learning control tasks, either in batch or online learning settings.
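As a loose caricature of gradient scaling with a per-parameter learning-rate memory (this sketch is not the memory model or update rule proposed in the paper; the sign-agreement heuristic and all constants are assumptions):

import numpy as np

class LearningRateMemory:
    # Toy per-parameter learning-rate memory: it scales incoming gradients and
    # adapts the scales based on whether successive gradients agree in sign.
    def __init__(self, dim, base_lr=1e-3, up=1.2, down=0.5):
        self.scales = np.ones(dim)
        self.prev_grad = np.zeros(dim)
        self.base_lr, self.up, self.down = base_lr, up, down

    def step(self, params, grad):
        update = -self.base_lr * self.scales * grad          # scale the observed gradient
        agree = np.sign(grad) == np.sign(self.prev_grad)     # did the previous scaling help?
        self.scales *= np.where(agree, self.up, self.down)   # update the internal memory
        self.prev_grad = grad.copy()
        return params + update

# Usage: wrap any gradient-based optimization loop.
opt = LearningRateMemory(dim=10)
params, grad = np.zeros(10), np.random.randn(10)
params = opt.step(params, grad)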

am

pdf video code [BibTex]


Learning 3D Shape Completion under Weak Supervision

Stutz, D., Geiger, A.

arXiv, May 2018 (article)

Abstract
We address the problem of 3D shape completion from sparse and noisy point clouds, a fundamental problem in computer vision and robotics. Recent approaches are either data-driven or learning-based: Data-driven approaches rely on a shape model whose parameters are optimized to fit the observations; Learning-based approaches, in contrast, avoid the expensive optimization step by learning to directly predict complete shapes from incomplete observations in a fully-supervised setting. However, full supervision is often not available in practice. In this work, we propose a weakly-supervised learning-based approach to 3D shape completion which neither requires slow optimization nor direct supervision. While we also learn a shape prior on synthetic data, we amortize, i.e., learn, maximum likelihood fitting using deep neural networks resulting in efficient shape completion without sacrificing accuracy. On synthetic benchmarks based on ShapeNet and ModelNet as well as on real robotics data from KITTI and Kinect, we demonstrate that the proposed amortized maximum likelihood approach is able to compete with fully supervised baselines and outperforms data-driven approaches, while requiring less supervision and being significantly faster.

avg

PDF Project Page Project Page [BibTex]


Tempered Adversarial Networks

Sajjadi, M. S. M., Parascandolo, G., Mehrjou, A., Schölkopf, B.

Workshop at the 6th International Conference on Learning Representations (ICLR), May 2018 (conference)

ei

arXiv [BibTex]


Learning Coupled Forward-Inverse Models with Combined Prediction Errors

Koert, D., Maeda, G., Neumann, G., Peters, J.

IEEE International Conference on Robotics and Automation, (ICRA), pages: 2433-2439, IEEE, May 2018 (conference)

ei

DOI [BibTex]


Learning Disentangled Representations with Wasserstein Auto-Encoders

Rubenstein, P. K., Schölkopf, B., Tolstikhin, I.

Workshop at the 6th International Conference on Learning Representations (ICLR), May 2018 (conference)

ei

link (url) [BibTex]


Shaping in Practice: Training Wheels to Learn Fast Hopping Directly in Hardware

Heim, S., Ruppert, F., Sarvestani, A., Spröwitz, A.

In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA) 2018, pages: 5076-5081, IEEE, International Conference on Robotics and Automation, May 2018 (inproceedings)

Abstract
Learning instead of designing robot controllers can greatly reduce the engineering effort required, while also emphasizing robustness. Despite considerable progress in simulation, applying learning directly in hardware is still challenging, in part due to the necessity to explore potentially unstable parameters. We explore the concept of shaping the reward landscape with training wheels: temporary modifications of the physical hardware that facilitate learning. We demonstrate the concept with a robot leg mounted on a boom learning to hop fast. This proof of concept embodies typical challenges such as instability and contact, while being simple enough to empirically map out and visualize the reward landscape. Based on our results we propose three criteria for designing effective training wheels for learning in robotics.

dlg

Video Youtube link (url) Project Page [BibTex]


Automatic Estimation of Modulation Transfer Functions

Bauer, M., Volchkov, V., Hirsch, M., Schölkopf, B.

IEEE International Conference on Computational Photography (ICCP), May 2018 (conference)

ei sf

DOI [BibTex]


Learning Sensor Feedback Models from Demonstrations via Phase-Modulated Neural Networks

Sutanto, G., Su, Z., Schaal, S., Meier, F.

In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA) 2018, IEEE, International Conference on Robotics and Automation, May 2018 (inproceedings)

am

pdf video [BibTex]


Nonlinear decoding of a complex movie from the mammalian retina

Botella-Soler, V., Deny, S., Martius, G., Marre, O., Tkačik, G.

PLOS Computational Biology, 14(5):1-27, Public Library of Science, May 2018 (article)

Abstract
Neurons in the retina transform patterns of incoming light into sequences of neural spikes. We recorded from ∼100 neurons in the rat retina while it was stimulated with a complex movie. Using machine learning regression methods, we fit decoders to reconstruct the movie shown from the retinal output. We demonstrated that the retinal code can only be read out with a low error if decoders make use of correlations between successive spikes emitted by individual neurons. These correlations can be used to ignore spontaneous spiking that would, otherwise, cause even the best linear decoders to “hallucinate” nonexistent stimuli. This work represents the first high resolution single-trial full movie reconstruction and suggests a new paradigm for separating spontaneous from stimulus-driven neural activity.
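A minimal version of such history-based decoding could look like the ridge-regression sketch below; array shapes, the window length, and the use of scikit-learn are illustrative assumptions, not the authors' decoder.

import numpy as np
from sklearn.linear_model import Ridge

def build_history_features(spikes, window=10):
    # spikes: (T, N) binned spike counts. Stack the last `window` bins of every
    # neuron per time step so the decoder can exploit correlations between
    # successive spikes of individual neurons.
    T, _ = spikes.shape
    return np.asarray([spikes[t - window:t].ravel() for t in range(window, T)])

# Fake data: 5000 time bins, 100 neurons, 64-pixel flattened movie frames.
spikes = np.random.poisson(0.1, size=(5000, 100))
frames = np.random.rand(5000, 64)
X = build_history_features(spikes, window=10)
decoder = Ridge(alpha=1.0).fit(X, frames[10:])
reconstruction = decoder.predict(X)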

al

DOI [BibTex]


Causal Discovery Using Proxy Variables

Rojas-Carulla, M., Baroni, M., Lopez-Paz, D.

Workshop at 6th International Conference on Learning Representations (ICLR), May 2018 (conference)

ei

link (url) [BibTex]


Sample and Feedback Efficient Hierarchical Reinforcement Learning from Human Preferences

Pinsler, R., Akrour, R., Osa, T., Peters, J., Neumann, G.

IEEE International Conference on Robotics and Automation, (ICRA), pages: 596-601, IEEE, May 2018 (conference)

ei

DOI [BibTex]


Graphene-silver hybrid devices for sensitive photodetection in the ultraviolet

Paria, D., Jeong, H. H., Vadakkumbatt, V., Deshpande, P., Fischer, P., Ghosh, A., Ghosh, A.

Nanoscale, 10, pages: 7685-7693, April 2018 (article)

Abstract
The weak light-matter interaction in graphene can be enhanced with a number of strategies, among which sensitization with plasmonic nanostructures is particularly attractive. This has resulted in the development of graphene-plasmonic hybrid systems with strongly enhanced photodetection efficiencies in the visible and the IR, but none in the UV. Here, we describe a silver nanoparticle-graphene stacked optoelectronic device that shows strong enhancement of its photoresponse across the entire UV spectrum. The device fabrication strategy is scalable and modular. Self-assembly techniques are combined with physical shadow growth techniques to fabricate a regular large-area array of 50 nm silver nanoparticles onto which CVD graphene is transferred. The presence of the silver nanoparticles resulted in a plasmonically enhanced photoresponse as high as 3.2 A W^-1 in the wavelength range from 330 nm to 450 nm. At lower wavelengths, close to the Van Hove singularity of the density of states in graphene, we measured an even higher responsivity of 14.5 A W^-1 at 280 nm, which corresponds to a more than 10 000-fold enhancement over the photoresponse of native graphene.

pf

link (url) DOI [BibTex]


Nanoparticles on the move for medicine

Fischer, P.

Physics World Focus on Nanotechnology, pages: 26028, (Editors: Margaret Harris), IOP Publishing Ltd and individual contributors, April 2018 (article)

Abstract
Peer Fischer outlines the prospects for creating “nanoswimmers” that can be steered through the body to deliver drugs directly to their targets. Molecules don't move very fast on their own. If they had to rely solely on diffusion – a slow and inefficient process linked to the Brownian motion of small particles and molecules in solution – then a protein molecule, for instance, would take around three weeks to travel a single centimetre down a nerve fibre. This is why active transport mechanisms exist in cells and in the human body: without them, all the processes of life would happen at a pace that would make snails look speedy.
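The three-week figure quoted above is consistent with a simple diffusion-time estimate. As a rough back-of-the-envelope check, assuming a protein diffusion coefficient on the order of D ≈ 30 μm²/s (an illustrative value, not one taken from the article):

t \approx \frac{x^2}{2D} = \frac{(10^4\,\mu\mathrm{m})^2}{2 \times 30\,\mu\mathrm{m}^2/\mathrm{s}} \approx 1.7 \times 10^{6}\,\mathrm{s} \approx 19\ \mathrm{days}

for x = 1 cm, i.e. on the order of three weeks.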

pf

link (url) [BibTex]


Group invariance principles for causal generative models

Besserve, M., Shajarisales, N., Schölkopf, B., Janzing, D.

Proceedings of the 21st International Conference on Artificial Intelligence and Statistics (AISTATS), 84, pages: 557-565, Proceedings of Machine Learning Research, (Editors: Amos Storkey and Fernando Perez-Cruz), PMLR, April 2018 (conference)

ei

link (url) [BibTex]


Poster Abstract: Toward Fast Closed-loop Control over Multi-hop Low-power Wireless Networks

Mager, F., Baumann, D., Trimpe, S., Zimmerling, M.

Proceedings of the 17th ACM/IEEE Conference on Information Processing in Sensor Networks (IPSN), pages: 158-159, Porto, Portugal, April 2018 (poster)

ics

DOI Project Page [BibTex]


Boosting Variational Inference: an Optimization Perspective

Locatello, F., Khanna, R., Ghosh, J., Rätsch, G.

Proceedings of the 21st International Conference on Artificial Intelligence and Statistics (AISTATS), 84, pages: 464-472, Proceedings of Machine Learning Research, (Editors: Amos Storkey and Fernando Perez-Cruz), PMLR, April 2018 (conference)

ei

link (url) [BibTex]


Mixture of Attractors: A Novel Movement Primitive Representation for Learning Motor Skills From Demonstrations

Manschitz, S., Gienger, M., Kober, J., Peters, J.

IEEE Robotics and Automation Letters, 3(2):926-933, April 2018 (article)

ei

DOI [BibTex]