2019

Attacking Optical Flow

Ranjan, A., Janai, J., Geiger, A., Black, M. J.

In International Conference on Computer Vision, November 2019 (inproceedings)

Abstract
Deep neural nets achieve state-of-the-art performance on the problem of optical flow estimation. Since optical flow is used in several safety-critical applications like self-driving cars, it is important to gain insights into the robustness of those techniques. Recently, it has been shown that adversarial attacks easily fool deep neural networks to misclassify objects. The robustness of optical flow networks to adversarial attacks, however, has not been studied so far. In this paper, we extend adversarial patch attacks to optical flow networks and show that such attacks can compromise their performance. We show that corrupting a small patch of less than 1% of the image size can significantly affect optical flow estimates. Our attacks lead to noisy flow estimates that extend significantly beyond the region of the attack, in many cases even completely erasing the motion of objects in the scene. While networks using an encoder-decoder architecture are very sensitive to these attacks, we found that networks using a spatial pyramid architecture are less affected. We analyse the success and failure of attacking both architectures by visualizing their feature maps and comparing them to classical optical flow techniques which are robust to these attacks. We also demonstrate that such attacks are practical by placing a printed pattern into real scenes.
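
The patch attack summarised above can be prototyped with standard automatic differentiation. The sketch below is only an illustration of the idea, not the authors' released code: `flow_net`, the paste location, and all hyper-parameters are assumptions, and the flow network is assumed to return a (B, 2, H, W) flow field.

```python
import torch
import torch.nn.functional as F

def attack_flow(flow_net, img1, img2, patch_size=32, steps=100, lr=1e-2, loc=(10, 10)):
    """Optimize a small adversarial patch that corrupts optical flow estimates.

    Minimal sketch: `flow_net(img1, img2)` is assumed to return flow of shape
    (B, 2, H, W); the patch is pasted at the fixed location `loc` in both frames.
    Names and hyper-parameters are illustrative only.
    """
    _, _, H, W = img1.shape
    y, x = loc
    with torch.no_grad():
        clean_flow = flow_net(img1, img2)                    # unattacked reference
    patch = torch.rand(1, 3, patch_size, patch_size, requires_grad=True)
    mask = torch.zeros(1, 1, H, W)
    mask[:, :, y:y + patch_size, x:x + patch_size] = 1.0
    opt = torch.optim.Adam([patch], lr=lr)
    for _ in range(steps):
        # place the patch into a full-size canvas, then composite it into both frames
        full = F.pad(patch.clamp(0, 1), (x, W - x - patch_size, y, H - y - patch_size))
        adv1 = img1 * (1 - mask) + full * mask
        adv2 = img2 * (1 - mask) + full * mask
        # maximize the end-point error w.r.t. the clean prediction
        loss = -(flow_net(adv1, adv2) - clean_flow).norm(dim=1).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return patch.detach()
```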

Video Project Page Paper Supplementary Material link (url) [BibTex]

A Learnable Safety Measure

Heim, S., Rohr, A. V., Trimpe, S., Badri-Spröwitz, A.

Conference on Robot Learning, November 2019 (conference) Accepted

Arxiv [BibTex]

EM-Fusion: Dynamic Object-Level SLAM With Probabilistic Data Association

Strecke, M., Stückler, J.

In International Conference on Computer Vision, October 2019, arXiv:1904.11781 (inproceedings)

preprint Project page Poster [BibTex]

Occupancy Flow: 4D Reconstruction by Learning Particle Dynamics

Niemeyer, M., Mescheder, L., Oechsle, M., Geiger, A.

International Conference on Computer Vision, October 2019 (conference)

Abstract
Deep learning based 3D reconstruction techniques have recently achieved impressive results. However, while state-of-the-art methods are able to output complex 3D geometry, it is not clear how to extend these results to time-varying topologies. Approaches treating each time step individually lack continuity and exhibit slow inference, while traditional 4D reconstruction methods often utilize a template model or discretize the 4D space at fixed resolution. In this work, we present Occupancy Flow, a novel spatio-temporal representation of time-varying 3D geometry with implicit correspondences. Towards this goal, we learn a temporally and spatially continuous vector field which assigns a motion vector to every point in space and time. In order to perform dense 4D reconstruction from images or sparse point clouds, we combine our method with a continuous 3D representation. Implicitly, our model yields correspondences over time, thus enabling fast inference while providing a sound physical description of the temporal dynamics. We show that our method can be used for interpolation and reconstruction tasks, and demonstrate the accuracy of the learned correspondences. We believe that Occupancy Flow is a promising new 4D representation which will be useful for a variety of spatio-temporal reconstruction tasks.
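
The continuous vector field described in the abstract can be thought of as a small network that maps a space-time point to a velocity, with point trajectories obtained by integrating that field. The sketch below illustrates this idea under assumptions (layer sizes, simple Euler integration, no conditioning on latent shape codes) that are not taken from the paper.

```python
import torch
import torch.nn as nn

class VelocityField(nn.Module):
    """Maps a point (x, y, z) and a time t to a 3D motion vector.

    Minimal sketch of a continuous space-time vector field; layer widths and
    the absence of shape/image conditioning are assumptions.
    """
    def __init__(self, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(4, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, points, t):
        t = t.expand(points.shape[0], 1)             # broadcast time to every point
        return self.net(torch.cat([points, t], dim=-1))

def integrate(field, points, t0=0.0, t1=1.0, steps=20):
    """Advect points through the learned field with plain Euler steps."""
    dt = (t1 - t0) / steps
    t = torch.tensor([t0])
    for _ in range(steps):
        points = points + dt * field(points, t)
        t = t + dt
    return points
```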

pdf poster suppmat code Project page video blog [BibTex]

Texture Fields: Learning Texture Representations in Function Space

Oechsle, M., Mescheder, L., Niemeyer, M., Strauss, T., Geiger, A.

International Conference on Computer Vision, October 2019 (conference)

Abstract
In recent years, substantial progress has been achieved in learning-based reconstruction of 3D objects. At the same time, generative models were proposed that can generate highly realistic images. However, despite this success in these closely related tasks, texture reconstruction of 3D objects has received little attention from the research community and state-of-the-art methods are either limited to comparably low resolution or constrained experimental setups. A major reason for these limitations is that common representations of texture are inefficient or hard to interface for modern deep learning techniques. In this paper, we propose Texture Fields, a novel texture representation which is based on regressing a continuous 3D function parameterized with a neural network. Our approach circumvents limiting factors like shape discretization and parameterization, as the proposed texture representation is independent of the shape representation of the 3D object. We show that Texture Fields are able to represent high frequency texture and naturally blend with modern deep learning techniques. Experimentally, we find that Texture Fields compare favorably to state-of-the-art methods for conditional texture reconstruction of 3D objects and enable learning of probabilistic generative models for texturing unseen 3D models. We believe that Texture Fields will become an important building block for the next generation of generative 3D models.
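
A texture field in this sense is simply a learned function from a 3D surface point (plus a conditioning code) to an RGB colour. The minimal sketch below shows such a function; the hidden width and the concatenation-based conditioning are assumptions for illustration, not the published architecture.

```python
import torch
import torch.nn as nn

class TextureField(nn.Module):
    """Regresses an RGB colour for any 3D surface point, conditioned on a latent code.

    Illustrative sketch only; `code` is assumed to have shape (1, code_dim).
    """
    def __init__(self, code_dim=128, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + code_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid(),      # colours in [0, 1]
        )

    def forward(self, points, code):
        code = code.expand(points.shape[0], -1)      # repeat the code for every point
        return self.net(torch.cat([points, code], dim=-1))
```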

pdf suppmat video poster blog Project Page [BibTex]

Trunk Pitch Oscillations for Joint Load Redistribution in Humans and Humanoid Robots

Drama, Ö., Badri-Spröwitz, A.

Proceedings of the International Conference on Humanoid Robots (Humanoids), September 2019 (conference) Accepted

link (url) [BibTex]

Series Elastic Behavior of Biarticular Muscle-Tendon Structure in a Robotic Leg

Ruppert, F., Badri-Spröwitz, A.

Frontiers in Neurorobotics, 13(64), August 2019 (article)

Frontiers YouTube link (url) DOI [BibTex]

The positive side of damping

Heim, S., Millard, M., Le Mouel, C., Sproewitz, A.

Proceedings of AMAM, The 9th International Symposium on Adaptive Motion of Animals and Machines, August 2019 (conference) Accepted

[BibTex]

Beyond Basins of Attraction: Quantifying Robustness of Natural Dynamics

Heim, S., Spröwitz, A.

IEEE Transactions on Robotics (T-RO), 35(4), pages: 939-952, August 2019 (article)

Abstract
Properly designing a system to exhibit favorable natural dynamics can greatly simplify designing or learning the control policy. However, it is still unclear what constitutes favorable natural dynamics and how to quantify its effect. Most studies of simple walking and running models have focused on the basins of attraction of passive limit cycles and the notion of self-stability. We instead emphasize the importance of stepping beyond basins of attraction. In this paper, we show an approach based on viability theory to quantify robust sets in state-action space. These sets are valid for the family of all robust control policies, which allows us to quantify the robustness inherent to the natural dynamics before designing the control policy or specifying a control objective. We illustrate our formulation using spring-mass models, simple low-dimensional models of running systems. We then show an example application by optimizing robustness of a simulated planar monoped, using a gradient-free optimization scheme. Both case studies result in a nonlinear effective stiffness providing more robustness.
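
The core computation behind such robust sets is a viability-style fixed point on a discretized state-action grid: a state stays viable only if at least one action leads to another viable state. The sketch below illustrates this generic idea; the transition table, grid, and failure model are placeholders, not the paper's implementation.

```python
import numpy as np

def viability_kernel(transition, max_iters=100):
    """Approximate the viable set on a discretized state-action grid.

    `transition[i, j]` is the index of the state reached from state i with
    action j, or -1 if that step ends in failure (e.g. the spring-mass model
    falling). Generic fixed-point sketch of the viability computation.
    """
    n_states = transition.shape[0]
    viable = np.ones(n_states, dtype=bool)
    for _ in range(max_iters):
        # an (i, j) pair is "safe" if it does not fail and reaches a viable state
        reaches_viable = (transition >= 0) & viable[np.clip(transition, 0, n_states - 1)]
        new_viable = viable & reaches_viable.any(axis=1)
        if np.array_equal(new_viable, viable):
            break                                     # fixed point reached
        viable = new_viable
    return viable
```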

arXiv preprint arXiv:1806.08081 T-RO link (url) DOI Project Page [BibTex]

Taking a Deeper Look at the Inverse Compositional Algorithm

Lv, Z., Dellaert, F., Rehg, J. M., Geiger, A.

In Proceedings IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2019 (inproceedings)

Abstract
In this paper, we provide a modern synthesis of the classic inverse compositional algorithm for dense image alignment. We first discuss the assumptions made by this well-established technique, and subsequently propose to relax these assumptions by incorporating data-driven priors into this model. More specifically, we unroll a robust version of the inverse compositional algorithm and replace multiple components of this algorithm using more expressive models whose parameters we train in an end-to-end fashion from data. Our experiments on several challenging 3D rigid motion estimation tasks demonstrate the advantages of combining optimization with learning-based techniques, outperforming the classic inverse compositional algorithm as well as data-driven image-to-pose regression approaches.
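
For reference, a single step of the classic (robust) inverse compositional algorithm solves a weighted Gauss-Newton system whose Jacobian is precomputed once on the template. The snippet below shows this textbook update; in the paper, components such as the residuals, weights, and damping are replaced by learned modules, which is not reflected here.

```python
import numpy as np

def ic_gauss_newton_step(J, r, w=None):
    """One robust Gauss-Newton step of the inverse compositional algorithm.

    J: (N, P) Jacobian of the template w.r.t. the warp parameters (precomputed once),
    r: (N,) residuals, w: optional (N,) robust weights. Returns the parameter
    increment, which is then inverted and composed with the current warp.
    """
    W = np.eye(len(r)) if w is None else np.diag(w)
    H = J.T @ W @ J                      # Gauss-Newton approximation of the Hessian
    return np.linalg.solve(H, J.T @ W @ r)
```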

pdf suppmat Video Project Page Poster [BibTex]

MOTS: Multi-Object Tracking and Segmentation

Voigtlaender, P., Krause, M., Osep, A., Luiten, J., Sekar, B. B. G., Geiger, A., Leibe, B.

In Proceedings IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2019 (inproceedings)

Abstract
This paper extends the popular task of multi-object tracking to multi-object tracking and segmentation (MOTS). Towards this goal, we create dense pixel-level annotations for two existing tracking datasets using a semi-automatic annotation procedure. Our new annotations comprise 65,213 pixel masks for 977 distinct objects (cars and pedestrians) in 10,870 video frames. For evaluation, we extend existing multi-object tracking metrics to this new task. Moreover, we propose a new baseline method which jointly addresses detection, tracking, and segmentation with a single convolutional network. We demonstrate the value of our datasets by achieving improvements in performance when training on MOTS annotations. We believe that our datasets, metrics and baseline will become a valuable resource towards developing multi-object tracking approaches that go beyond 2D bounding boxes.

pdf suppmat Project Page Poster Video Project Page [BibTex]

PointFlowNet: Learning Representations for Rigid Motion Estimation from Point Clouds

Behl, A., Paschalidou, D., Donne, S., Geiger, A.

In Proceedings IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2019 (inproceedings)

Abstract
Despite significant progress in image-based 3D scene flow estimation, the performance of such approaches has not yet reached the fidelity required by many applications. Simultaneously, these applications are often not restricted to image-based estimation: laser scanners provide a popular alternative to traditional cameras, for example in the context of self-driving cars, as they directly yield a 3D point cloud. In this paper, we propose to estimate 3D motion from such unstructured point clouds using a deep neural network. In a single forward pass, our model jointly predicts 3D scene flow as well as the 3D bounding box and rigid body motion of objects in the scene. While the prospect of estimating 3D scene flow from unstructured point clouds is promising, it is also a challenging task. We show that the traditional global representation of rigid body motion prohibits inference by CNNs, and propose a translation equivariant representation to circumvent this problem. For training our deep network, a large dataset is required. Because of this, we augment real scans from KITTI with virtual objects, realistically modeling occlusions and simulating sensor noise. A thorough comparison with classic and learning-based techniques highlights the robustness of the proposed approach.
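
The translation-equivariance argument can be made concrete: a global rigid motion (R, t) induces, at every point p, the local translation (R − I)p + t, and this per-point field shifts consistently with the object in the scene, whereas the global parameters do not. The sketch below converts between the two representations using a standard Kabsch-style fit; it illustrates the representational point only and is not the network from the paper.

```python
import numpy as np

def rigid_motion_to_local_translations(points, R, t):
    """Convert a global rigid body motion (R, t) into per-point translation vectors.

    Each point p moves by (R - I) p + t; this per-point field is translation
    equivariant, unlike the global (R, t) parameterisation.
    """
    return points @ (R - np.eye(3)).T + t

def local_translations_to_rigid_motion(points, translations):
    """Recover (R, t) from per-point translations by a least-squares (Kabsch) fit."""
    src = points
    dst = points + translations
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflections
    R = Vt.T @ D @ U.T
    t = dst.mean(0) - R @ src.mean(0)
    return R, t
```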

pdf suppmat Project Page Poster Video [BibTex]

Connecting the Dots: Learning Representations for Active Monocular Depth Estimation

Riegler, G., Liao, Y., Donne, S., Koltun, V., Geiger, A.

In Proceedings IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2019 (inproceedings)

Abstract
We propose a technique for depth estimation with a monocular structured-light camera, i.e., a calibrated stereo set-up with one camera and one laser projector. Instead of formulating the depth estimation via a correspondence search problem, we show that a simple convolutional architecture is sufficient for high-quality disparity estimates in this setting. As accurate ground-truth is hard to obtain, we train our model in a self-supervised fashion with a combination of photometric and geometric losses. Further, we demonstrate that the projected pattern of the structured light sensor can be reliably separated from the ambient information. This can then be used to improve depth boundaries in a weakly supervised fashion by modeling the joint statistics of image and depth edges. The model trained in this fashion compares favorably to the state-of-the-art on challenging synthetic and real-world datasets. In addition, we contribute a novel simulator, which allows benchmarking active depth prediction algorithms in controlled conditions.

pdf suppmat Poster Project Page [BibTex]

Learning Non-volumetric Depth Fusion using Successive Reprojections

Donne, S., Geiger, A.

In Proceedings IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2019 (inproceedings)

Abstract
Given a set of input views, multi-view stereopsis techniques estimate depth maps to represent the 3D reconstruction of the scene; these are fused into a single, consistent reconstruction -- most often a point cloud. In this work we propose to learn an auto-regressive depth refinement directly from data. While deep learning has improved the accuracy and speed of depth estimation significantly, learned MVS techniques remain limited to the plane-sweeping paradigm. We refine a set of input depth maps by successively reprojecting information from neighbouring views to leverage multi-view constraints. Compared to learning-based volumetric fusion techniques, an image-based representation allows significantly more detailed reconstructions; compared to traditional point-based techniques, our method learns noise suppression and surface completion in a data-driven fashion. Due to the limited availability of high-quality reconstruction datasets with ground truth, we introduce two novel synthetic datasets to (pre-)train our network. Our approach is able to improve both the output depth maps and the reconstructed point cloud, for both learned and traditional depth estimation front-ends, on both synthetic and real data.
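
The reprojection step at the heart of this refinement is ordinary multi-view geometry: a neighbouring depth map is back-projected, transformed into the reference frame, and re-rendered with a z-buffer. The following plain pinhole-camera sketch shows that step in isolation; the learned refinement network and the fusion into a point cloud are omitted.

```python
import numpy as np

def reproject_depth(depth_src, K, R, t, shape_ref):
    """Reproject a neighbouring depth map into the reference view.

    Pinhole model with intrinsics K and relative pose (R, t) from the source to
    the reference camera; a z-buffer resolves collisions. Plain geometric sketch,
    independent of any learned refinement.
    """
    H, W = depth_src.shape
    v, u = np.mgrid[0:H, 0:W]
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T   # 3 x N homogeneous pixels
    pts_src = np.linalg.inv(K) @ pix * depth_src.reshape(1, -1)         # back-project to 3D
    pts_ref = R @ pts_src + t.reshape(3, 1)                             # move to reference frame
    proj = K @ pts_ref
    u_r = np.round(proj[0] / proj[2]).astype(int)
    v_r = np.round(proj[1] / proj[2]).astype(int)
    z = proj[2]
    out = np.full(shape_ref, np.inf)
    valid = (z > 0) & (u_r >= 0) & (u_r < shape_ref[1]) & (v_r >= 0) & (v_r < shape_ref[0])
    for ui, vi, zi in zip(u_r[valid], v_r[valid], z[valid]):
        out[vi, ui] = min(out[vi, ui], zi)                               # keep the closest surface
    out[np.isinf(out)] = 0.0                                             # mark empty pixels
    return out
```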

pdf suppmat Project Page Video Poster blog [BibTex]

Superquadrics Revisited: Learning 3D Shape Parsing beyond Cuboids

Paschalidou, D., Ulusoy, A. O., Geiger, A.

In Proceedings IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2019 (inproceedings)

Abstract
Abstracting complex 3D shapes with parsimonious part-based representations has been a long-standing goal in computer vision. This paper presents a learning-based solution to this problem which goes beyond the traditional 3D cuboid representation by exploiting superquadrics as atomic elements. We demonstrate that superquadrics lead to more expressive 3D scene parses while being easier to learn than 3D cuboid representations. Moreover, we provide an analytical solution to the Chamfer loss which avoids the need for computationally expensive reinforcement learning or iterative prediction. Our model learns to parse 3D objects into consistent superquadric representations without supervision. Results on various ShapeNet categories as well as the SURREAL human body dataset demonstrate the flexibility of our model in capturing fine details and complex poses that could not have been modelled using cuboids.
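
A superquadric is fully described by its scale parameters and two shape exponents via the standard inside-outside function, which is what makes an analytical Chamfer-style loss tractable. The snippet below evaluates that function for points in the primitive's local frame; the identity pose is an assumption made for brevity.

```python
import numpy as np

def superquadric_occupancy(points, size, eps):
    """Inside-outside function of a superquadric.

    size = (a1, a2, a3) are the scales, eps = (e1, e2) the shape exponents;
    values below 1 lie inside the primitive. Standard formulation from the
    superquadrics literature, with the pose assumed to be identity.
    """
    a1, a2, a3 = size
    e1, e2 = eps
    x, y, z = points[:, 0] / a1, points[:, 1] / a2, points[:, 2] / a3
    f = ((np.abs(x) ** (2 / e2) + np.abs(y) ** (2 / e2)) ** (e2 / e1)
         + np.abs(z) ** (2 / e1))
    return f
```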

Project Page Poster suppmat pdf Video blog handout [BibTex]

Variational Autoencoders Recover PCA Directions (by Accident)

Rolinek, M., Zietlow, D., Martius, G.

In Proceedings IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2019 (inproceedings)

Abstract
The Variational Autoencoder (VAE) is a powerful architecture capable of representation learning and generative modeling. When it comes to learning interpretable (disentangled) representations, VAE and its variants show unparalleled performance. However, the reasons for this are unclear, since a very particular alignment of the latent embedding is needed but the design of the VAE does not encourage it in any explicit way. We address this matter and offer the following explanation: the diagonal approximation in the encoder together with the inherent stochasticity force local orthogonality of the decoder. The local behavior of promoting both reconstruction and orthogonality matches closely how the PCA embedding is chosen. Alongside providing an intuitive understanding, we justify the statement with full theoretical analysis as well as with experiments.
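
The "diagonal approximation in the encoder" refers to the per-dimension variance predicted by a standard VAE encoder. The minimal sketch below shows where that approximation enters the objective; the linear encoder/decoder and the dimensions are arbitrary stand-ins, chosen only to keep the example short.

```python
import torch
import torch.nn as nn

class DiagonalGaussianVAE(nn.Module):
    """Minimal VAE with the diagonal-covariance encoder discussed above.

    Illustrative only: two linear layers stand in for encoder and decoder.
    """
    def __init__(self, x_dim=784, z_dim=8):
        super().__init__()
        self.enc = nn.Linear(x_dim, 2 * z_dim)   # predicts per-dimension mean and log-variance
        self.dec = nn.Linear(z_dim, x_dim)

    def forward(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterisation trick
        recon = self.dec(z)
        # KL term of a diagonal Gaussian posterior against a standard normal prior
        kl = 0.5 * (mu.pow(2) + logvar.exp() - 1 - logvar).sum(-1).mean()
        rec = (recon - x).pow(2).sum(-1).mean()                 # Gaussian reconstruction term
        return rec + kl                                         # negative ELBO up to constants
```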

arXiv link (url) Project Page [BibTex]

Real-Time Dense Mapping for Self-Driving Vehicles using Fisheye Cameras

Cui, Z., Heng, L., Yeo, Y. C., Geiger, A., Pollefeys, M., Sattler, T.

In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), May 2019 (inproceedings)

Abstract
We present a real-time dense geometric mapping algorithm for large-scale environments. Unlike existing methods which use pinhole cameras, our implementation is based on fisheye cameras which have a larger field of view and benefit other tasks including Visual-Inertial Odometry, localization and object detection around vehicles. Our algorithm runs on in-vehicle PCs at approximately 15 Hz, enabling vision-only 3D scene perception for self-driving vehicles. For each synchronized set of images captured by multiple cameras, we first compute a depth map for a reference camera using plane-sweeping stereo. To maintain both accuracy and efficiency, while accounting for the fact that fisheye images have a rather low resolution, we recover the depths using multiple image resolutions. We adopt the fast object detection framework YOLOv3 to remove potentially dynamic objects. At the end of the pipeline, we fuse the fisheye depth images into the truncated signed distance function (TSDF) volume to obtain a 3D map. We evaluate our method on large-scale urban datasets, and results show that our method works well even in complex environments.
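
The final fusion stage mentioned above is standard TSDF integration: every depth image updates a truncated signed distance per voxel as a running weighted average. The sketch below shows that update for a pinhole camera; the fisheye projection model and the dynamic-object masking from the paper are deliberately left out.

```python
import numpy as np

def tsdf_update(tsdf, weights, depth, K, cam_pose, voxel_centers, trunc=0.1):
    """Integrate one depth image into a TSDF volume (running weighted average).

    `voxel_centers` is (N, 3) in world coordinates, `cam_pose` is a 4x4
    world-to-camera transform. Textbook TSDF fusion sketch.
    """
    pts = cam_pose[:3, :3] @ voxel_centers.T + cam_pose[:3, 3:4]      # voxels in camera frame
    proj = K @ pts
    u = np.round(proj[0] / proj[2]).astype(int)
    v = np.round(proj[1] / proj[2]).astype(int)
    z = proj[2]
    H, W = depth.shape
    valid = (z > 0) & (u >= 0) & (u < W) & (v >= 0) & (v < H)
    sdf = depth[v[valid], u[valid]] - z[valid]                        # signed distance along the ray
    keep = sdf > -trunc                                               # discard voxels far behind the surface
    idx = np.flatnonzero(valid)[keep]
    d = np.clip(sdf[keep] / trunc, -1.0, 1.0)                         # truncate and normalise
    tsdf[idx] = (tsdf[idx] * weights[idx] + d) / (weights[idx] + 1)   # running average
    weights[idx] += 1
    return tsdf, weights
```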

pdf video poster Project Page [BibTex]

Project AutoVision: Localization and 3D Scene Perception for an Autonomous Vehicle with a Multi-Camera System

Heng, L., Choi, B., Cui, Z., Geppert, M., Hu, S., Kuan, B., Liu, P., Nguyen, R. M. H., Yeo, Y. C., Geiger, A., Lee, G. H., Pollefeys, M., Sattler, T.

In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), May 2019 (inproceedings)

Abstract
Project AutoVision aims to develop localization and 3D scene perception capabilities for a self-driving vehicle. Such capabilities will enable autonomous navigation in urban and rural environments, in day and night, and with cameras as the only exteroceptive sensors. The sensor suite employs many cameras for both 360-degree coverage and accurate multi-view stereo; the use of low-cost cameras keeps the cost of this sensor suite to a minimum. In addition, the project seeks to extend the operating envelope to include GNSS-less conditions which are typical for environments with tall buildings, foliage, and tunnels. Emphasis is placed on leveraging multi-view geometry and deep learning to enable the vehicle to localize and perceive in 3D space. This paper presents an overview of the project, and describes the sensor suite and current progress in the areas of calibration, localization, and perception.

pdf [BibTex]

X-ray Optics Fabrication Using Unorthodox Approaches

Sanli, U., Baluktsian, M., Ceylan, H., Sitti, M., Weigand, M., Schütz, G., Keskinbora, K.

Bulletin of the American Physical Society, APS, 2019 (article)

[BibTex]

Nanoscale detection of spin wave deflection angles in permalloy

Gross, F., Träger, N., Förster, J., Weigand, M., Schütz, G., Gräfe, J.

Applied Physics Letters, 114(1), American Institute of Physics, Melville, NY, 2019 (article)

DOI [BibTex]

Generation of switchable singular beams with dynamic metasurfaces

Yu, P., Li, J., Li, X., Schütz, G., Hirscher, M., Zhang, S., Liu, N.

ACS Nano, 13(6):7100-7106, American Chemical Society, Washington, DC, 2019 (article)

DOI [BibTex]

Extracting the dynamic magnetic contrast in time-resolved X-ray transmission microscopy

Schaffers, T., Feggeler, T., Pile, S., Meckenstock, R., Buchner, M., Spoddig, D., Ney, V., Farle, M., Wende, H., Wintz, S., Weigand, M., Ohldag, H., Ollefs, K., Ney, A.

Nanomaterials, 9(7), MDPI, Basel, Switzerland, 2019 (article)

DOI [BibTex]

gFORC: A graphics processing unit accelerated first-order reversal-curve calculator

Groß, F., Martínez-García, J. C., Ilse, S. E., Schütz, G., Goering, E., Rivas, M., Gräfe, J.

Journal of Applied Physics, 126(16), AIP Publishing, New York, NY, 2019 (article)

DOI [BibTex]

Piezo-electrical control of gyration dynamics of magnetic vortices

Filianina, M., Baldrati, L., Hajiri, T., Litzius, K., Foerster, M., Aballe, L., Kläui, M.

Applied Physics Letters, 115(6), American Institute of Physics, Melville, NY, 2019 (article)

DOI [BibTex]

Control What You Can: Intrinsically Motivated Task-Planning Agent

Blaes, S., Vlastelica, M., Zhu, J., Martius, G.

In Advances in Neural Information Processing Systems (NeurIPS'19), Curran Associates, Inc., 2019 (inproceedings)

Abstract
We present a novel intrinsically motivated agent that learns how to control the environment in the fastest possible manner by optimizing learning progress. It learns what can be controlled, how to allocate time and attention, and the relations between objects using surprise based motivation. The effectiveness of our method is demonstrated in a synthetic as well as a robotic manipulation environment yielding considerably improved performance and smaller sample complexity. In a nutshell, our work combines several task-level planning agent structures (backtracking search on task graph, probabilistic road-maps, allocation of search efforts) with intrinsic motivation to achieve learning from scratch.

link (url) [BibTex]

Barely porous organic cages for hydrogen isotope separation

Liu, M., Zhang, L., Little, M. A., Kapil, V., Ceriotti, M., Yang, S., Ding, L., Holden, D. L., Balderas-Xicohténcatl, R., He, D., Clowes, R., Chong, S. Y., Schütz, G., Chen, L., Hirscher, M., Cooper, A. I.

Science, 366(6465):613-620, American Association for the Advancement of Science, Washington, D.C., 2019 (article)

DOI [BibTex]

Coherent excitation of heterosymmetric spin waves with ultrashort wavelengths

Dieterle, G., Förster, J., Stoll, H., Semisalova, A. S., Finizio, S., Gangwar, A., Weigand, M., Noske, M., Fähnle, M., Bykova, I., Gräfe, J., Bozhko, D. A., Musiienko-Shmarova, H. Y., Tiberkevich, V., Slavin, A. N., Back, C. H., Raabe, J., Schütz, G., Wintz, S.

Physical Review Letters, 122(11), American Physical Society, Woodbury, N.Y., 2019 (article)

DOI [BibTex]

Autonomous Identification and Goal-Directed Invocation of Event-Predictive Behavioral Primitives

Gumbsch, C., Butz, M. V., Martius, G.

IEEE Transactions on Cognitive and Developmental Systems, 2019 (article)

Abstract
Voluntary behavior of humans appears to be composed of small, elementary building blocks or behavioral primitives. While this modular organization seems crucial for the learning of complex motor skills and the flexible adaption of behavior to new circumstances, the problem of learning meaningful, compositional abstractions from sensorimotor experiences remains an open challenge. Here, we introduce a computational learning architecture, termed surprise-based behavioral modularization into event-predictive structures (SUBMODES), that explores behavior and identifies the underlying behavioral units completely from scratch. The SUBMODES architecture bootstraps sensorimotor exploration using a self-organizing neural controller. While exploring the behavioral capabilities of its own body, the system learns modular structures that predict the sensorimotor dynamics and generate the associated behavior. In line with recent theories of event perception, the system uses unexpected prediction error signals, i.e., surprise, to detect transitions between successive behavioral primitives. We show that, when applied to two robotic systems with completely different body kinematics, the system manages to learn a variety of complex behavioral primitives. Moreover, after initial self-exploration the system can use its learned predictive models progressively more effectively for invoking model predictive planning and goal-directed control in different tasks and environments.

arXiv PDF video link (url) DOI Project Page [BibTex]

Reprogrammability and Scalability of Magnonic Fibonacci Quasicrystals

Lisiecki, F., Rychły, J., Kuświk, P., Głowiński, H., Kłos, J. W., Groß, F., Bykova, I., Weigand, M., Zelent, M., Goering, E. J., Schütz, G., Gubbiotti, G., Krawczyk, M., Stobiecki, F., Dubowik, J., Gräfe, J.

Physical Review Applied, 11, pages: 054003, 2019 (article)

Abstract
Magnonic crystals are systems that can be used to design and tune the dynamic properties of magnetization. Here, we focus on one-dimensional Fibonacci magnonic quasicrystals. We confirm the existence of collective spin waves propagating through the structure as well as dispersionless modes; the reprogrammability of the resonance frequencies, dependent on the magnetization order; and dynamic spin-wave interactions. With the fundamental understanding of these properties, we lay a foundation for the scalable and advanced design of spin-wave band structures for spintronic, microwave, and magnonic applications.

link (url) DOI [BibTex]

Coordinated molecule-modulated magnetic phase with metamagnetism in metal-organic frameworks

Son, K., Kim, J. Y., Schütz, G., Kang, S. G., Moon, H. R., Oh, H.

Inorganic Chemistry, 58(14):8895-8899, American Chemical Society, Washington, DC, 2019 (article)

DOI [BibTex]

A special issue on hydrogen-based energy storage

Hirscher, M.

International Journal of Hydrogen Energy, 44, pages: 7737, Elsevier, Amsterdam, 2019 (misc)

DOI [BibTex]

Novel X-ray lenses for direct and coherent imaging

Sanli, U. T.

Universität Stuttgart, Stuttgart, 2019 (phdthesis)

link (url) DOI [BibTex]

Scaling of intrinsic domain wall magnetoresistance with confinement in electromigrated nanocontacts

Reeve, R. M., Loescher, A., Kazemi, H., Dupé, B., Mawass, M., Winkler, T., Schönke, D., Miao, J., Litzius, K., Sedlmayr, N., Schneider, I., Sinova, J., Eggert, S., Kläui, M.

Physical Review B, 99(21), American Physical Society, Woodbury, NY, 2019 (article)

DOI [BibTex]

Analytical classical density functionals from an equation learning network

Lin, S., Martius, G., Oettel, M.

2019, arXiv preprint https://arxiv.org/abs/1910.12752 (misc)

[BibTex]

Quantifying the Robustness of Natural Dynamics: a Viability Approach

Heim, S., Sproewitz, A.

Proceedings of Dynamic Walking, 2019 (conference) Accepted

Submission DW2019 [BibTex]

Nanoscale X-ray imaging of spin dynamics in Yttrium iron garnet

Förster, J., Wintz, S., Bailey, J., Finizio, S., Josten, E., Meertens, D., Dubs, C., Bozhko, D. A., Stoll, H., Dieterle, G., Traeger, N., Raabe, J., Slavin, A. N., Weigand, M., Gräfe, J., Schütz, G.

2019 (misc)

link (url) [BibTex]

Machine Learning for Haptics: Inferring Multi-Contact Stimulation From Sparse Sensor Configuration

Sun, H., Martius, G.

Frontiers in Neurorobotics, 13, pages: 51, 2019 (article)

Abstract
Robust haptic sensation systems are essential for obtaining dexterous robots. Currently, we have solutions for small surface areas such as fingers, but affordable and robust techniques for covering large areas of an arbitrary 3D surface are still missing. Here, we introduce a general machine learning framework to infer multi-contact haptic forces on a 3D robot’s limb surface from internal deformation measured by only a few physical sensors. The general idea of this framework is to predict first the whole surface deformation pattern from the sparsely placed sensors and then to infer number, locations and force magnitudes of unknown contact points. We show how this can be done even if training data can only be obtained for single-contact points using transfer learning at the example of a modified limb of the Poppy robot. With only 10 strain-gauge sensors we obtain a high accuracy also for multiple-contact points. The method can be applied to arbitrarily shaped surfaces and physical sensor types, as long as training data can be obtained.
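
In its simplest form, the first stage of such a framework is a regressor from a handful of sensor readings to a dense deformation pattern, from which contact locations are read off. The sketch below uses an ordinary linear least-squares fit as a stand-in for the learned predictor; the sensor and vertex dimensions and the threshold are illustrative assumptions, not values from the paper.

```python
import numpy as np
from numpy.linalg import lstsq

def fit_deformation_model(sensor_readings, surface_deformation):
    """Fit a linear map from sparse sensor readings to a dense deformation field.

    sensor_readings: (N, S) strain-gauge values and surface_deformation: (N, M)
    per-vertex deformation magnitudes from training trials. A deliberately simple
    linear stand-in for the learned predictor described in the abstract.
    """
    W, *_ = lstsq(sensor_readings, surface_deformation, rcond=None)
    return W

def infer_contacts(W, reading, threshold=0.5):
    """Predict the deformation pattern and report vertices above threshold as contacts."""
    deformation = reading @ W
    return np.flatnonzero(deformation > threshold), deformation
```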

link (url) DOI [BibTex]

Magnons in a Quasicrystal: Propagation, Extinction, and Localization of Spin Waves in Fibonacci Structures

Lisiecki, F., Rychły, J., Kuświk, P., Głowiński, H., Kłos, J. W., Groß, F., Träger, N., Bykova, I., Weigand, M., Zelent, M., Goering, E. J., Schütz, G., Krawczyk, M., Stobiecki, F., Dubowik, J., Gräfe, J.

Physical Review Applied, 11, pages: 054061, 2019 (article)

Abstract
Magnonic quasicrystals exceed the possibilities of spin-wave (SW) manipulation offered by regular magnonic crystals, because of their more complex SW spectra with fractal characteristics. Here, we report the direct x-ray microscopic observation of propagating SWs in a magnonic quasicrystal, consisting of dipolar coupled permalloy nanowires arranged in a one-dimensional Fibonacci sequence. SWs from the first and second band as well as evanescent waves from the band gap between them are imaged. Moreover, additional mini band gaps in the spectrum are demonstrated, directly indicating an influence of the quasiperiodicity of the system. Finally, the localization of SW modes within the Fibonacci crystal is shown. The experimental results are interpreted using numerical calculations and we deduce a simple model to estimate the frequency position of the magnonic gaps in quasiperiodic structures. The demonstrated features of SW spectra in one-dimensional magnonic quasicrystals allow utilizing this class of metamaterials for magnonics and make them an ideal basis for future applications.

link (url) DOI [BibTex]

Reconfigurable nanoscale spin wave majority gate with frequency-division multiplexing

Talmelli, G., Devolder, T., Träger, N., Förster, J., Wintz, S., Weigand, M., Stoll, H., Heyns, M., Schütz, G., Radu, I., Gräfe, J., Ciubotaru, F., Adelmann, C.

2019 (misc)

Abstract
Spin waves are excitations in ferromagnetic media that have been proposed as information carriers in spintronic devices with potentially much lower operation power than conventional charge-based electronics. The wave nature of spin waves can be exploited to design majority gates by coding information in their phase and using interference for computation. However, a scalable spin wave majority gate design that can be co-integrated alongside conventional Si-based electronics is still lacking. Here, we demonstrate a reconfigurable nanoscale inline spin wave majority gate with ultrasmall footprint, frequency-division multiplexing, and fan-out. Time-resolved imaging of the magnetisation dynamics by scanning transmission x-ray microscopy reveals the operation mode of the device and validates the full logic majority truth table. All-electrical spin wave spectroscopy further demonstrates spin wave majority gates with sub-micron dimensions, sub-micron spin wave wavelengths, and reconfigurable input and output ports. We also show that interference-based computation allows for frequency-division multiplexing as well as the computation of different logic functions in the same device. Such devices can thus form the foundation of a future spin-wave-based superscalar vector computing platform.

link (url) [BibTex]

Prototyping Micro- and Nano-Optics with Focused Ion Beam Lithography

Keskinbora, K.

SL48, pages: 46, SPIE Spotlight, SPIE Press, Bellingham, WA, 2019 (book)

DOI [BibTex]

Structural and magnetic properties of FePt-Tb alloy thin films

Schmidt, N. Y., Laureti, S., Radu, F., Ryll, H., Luo, C., d'Acapito, F., Tripathi, S., Goering, E., Weller, D., Albrecht, M.

Physical Review B, 100(6), American Physical Society, Woodbury, NY, 2019 (article)

DOI [BibTex]

Tunable perpendicular exchange bias in oxide heterostructures

Kim, G., Khaydukov, Y., Bluschke, M., Suyolcu, Y. E., Christiani, G., Son, K., Dietl, C., Keller, T., Weschke, E., van Aken, P. A., Logvenov, G., Keimer, B.

Physical Review Materials, 3(8), American Physical Society, College Park, MD, 2019 (article)

DOI [BibTex]

Visual-Inertial Mapping with Non-Linear Factor Recovery

Usenko, V., Demmel, N., Schubert, D., Stückler, J., Cremers, D.

2019, arXiv:1904.06504 (misc)

[BibTex]

Interpreting first-order reversal curves beyond the Preisach model: An experimental permalloy microarray investigation

Groß, F., Ilse, S. E., Schütz, G., Gräfe, J., Goering, E.

Physical Review B, 99(6), American Physical Society, Woodbury, NY, 2019 (article)

DOI [BibTex]

Bistability of magnetic states in Fe-Pd nanocap arrays

Aravind, P. B., Heigl, M., Fix, M., Groß, F., Gräfe, J., Mary, A., Rajgowrav, C. R., Krupiński, M., Marszałek, M., Thomas, S., Anantharaman, M. R., Albrecht, M.

Nanotechnology, 30, pages: 405705, 2019 (article)

Abstract
Magnetic bistability between vortex and single domain states in nanostructures are of great interest from both fundamental and technological perspectives. In soft magnetic nanostructures, the transition from a uniform collinear magnetic state to a vortex state (or vice versa) induced by a magnetic field involves an energy barrier. If the thermal energy is large enough for overcoming this energy barrier, magnetic bistability with a hysteresis-free switching occurs between the two magnetic states. In this work, we tune this energy barrier by tailoring the composition of FePd alloys, which were deposited onto self-assembled particle arrays forming magnetic vortex structures on top of the particles. The bifurcation temperature, where a hysteresis-free transition occurs, was extracted from the temperature dependence of the annihilation and nucleation field which increases almost linearly with Fe content of the magnetic alloy. This study provides insights into the magnetization reversal process associated with magnetic bistability, which allows adjusting the bifurcation temperature range by the material properties of the nanosystem.

link (url) [BibTex]

Learning to Disentangle Latent Physical Factors for Video Prediction

Zhu, D., Munderloh, M., Rosenhahn, B., Stückler, J.

In German Conference on Pattern Recognition (GCPR), 2019, to appear (inproceedings)

dataset & evaluation code video preprint [BibTex]

An international laboratory comparison study of volumetric and gravimetric hydrogen adsorption measurements

Hurst, K. E., Gennett, T., Adams, J., Allendorf, M. D., Balderas-Xicohténcatl, R., Bielewski, M., Edwards, B., Espinal, L., Fultz, B., Hirscher, M., Hudson, M. S. L., Hulvey, Z., Latroche, M., Liu, D., Kapelewski, M., Napolitano, E., Perry, Z. T., Purewal, J., Stavila, V., Veenstra, M., White, J. L., Yuan, Y., Zhou, H., Zlotea, C., Parilla, P.

ChemPhysChem, 20(15):1997-2009, Wiley-VCH, Weinheim, Germany, 2019 (article)

DOI [BibTex]

Hydrogen Energy

Hirscher, M., Autrey, T., Orimo, S.

ChemPhysChem, 20, pages: 1153-1411, Wiley-VCH, Weinheim, Germany, 2019 (misc)

link (url) DOI [BibTex]

Superior magnetic performance in FePt L10 nanomaterials

Son, K., Ryu, G. H., Jeong, H., Fink, L., Merz, M., Nagel, P., Schuppler, S., Richter, G., Goering, E., Schütz, G.

Small, 15(34), Wiley, Weinheim, Germany, 2019 (article)

DOI [BibTex]

Visualizing nanoscale spin waves using MAXYMUS

Gräfe, J., Weigand, M., Van Waeyenberge, B., Gangwar, A., Groß, F., Lisiecki, F., Rychly, J., Stoll, H., Träger, N., Förster, J., Stobiecki, F., Dubowik, J., Klos, H., Krawczyk, M., Back, C. H., Goering, E. J., Schütz, G.

Proceedings of SPIE, 11090, SPIE, Bellingham, Washington, 2019 (article)

DOI [BibTex]