

2024


Identifiable Causal Representation Learning

von Kügelgen, J.

University of Cambridge, Cambridge, UK, February 2024, (Cambridge-Tübingen-Fellowship) (phdthesis)

ei

[BibTex]



Creating a Haptic Empathetic Robot Animal That Feels Touch and Emotion

Burns, R.

University of Tübingen, Tübingen, Germany, February 2024, Department of Computer Science (phdthesis)

Abstract
Social touch, such as a hug or a poke on the shoulder, is an essential aspect of everyday interaction. Humans use social touch to gain attention, communicate needs, express emotions, and build social bonds. Despite its importance, touch sensing is very limited in most commercially available robots. By endowing robots with social-touch perception, one can unlock a myriad of new interaction possibilities. In this thesis, I present my work on creating a Haptic Empathetic Robot Animal (HERA), a koala-like robot for children with autism. I demonstrate the importance of establishing design guidelines based on one's target audience, which we investigated through interviews with autism specialists. I share our work on creating full-body tactile sensing for the NAO robot using low-cost, do-it-yourself (DIY) methods, and I introduce an approach to model long-term robot emotions using second-order dynamics.
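
As a rough illustration of the second-order emotion dynamics mentioned above (a sketch only; the state variable, gains, and Euler integration below are my own assumptions, not the implementation from the thesis), a scalar emotion state can be driven toward a touch-derived stimulus like a damped spring:

def step_emotion(e, e_dot, stimulus, omega_n=1.0, zeta=0.7, dt=0.02):
    # One Euler step of a damped second-order system:
    # e'' = omega_n^2 * (stimulus - e) - 2 * zeta * omega_n * e'
    e_ddot = omega_n ** 2 * (stimulus - e) - 2.0 * zeta * omega_n * e_dot
    e_dot = e_dot + e_ddot * dt
    e = e + e_dot * dt
    return e, e_dot

# Example: a sustained positive touch (stimulus = 1.0) pulls the emotion state
# smoothly toward 1.0; with zeta < 1 it overshoots slightly before settling.
e, e_dot = 0.0, 0.0
for _ in range(500):
    e, e_dot = step_emotion(e, e_dot, stimulus=1.0)
print(round(e, 3))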

hi

Project Page [BibTex]


2023


Gesture-Based Nonverbal Interaction for Exercise Robots

Mohan, M.

University of Tübingen, Tübingen, Germany, October 2023, Department of Computer Science (phdthesis)

Abstract
When teaching or coaching, humans augment their words with carefully timed hand gestures, head and body movements, and facial expressions to provide feedback to their students. Robots, however, rarely utilize these nuanced cues. A minimally supervised social robot equipped with these abilities could support people in exercising, physical therapy, and learning new activities. This thesis examines how the intuitive power of human gestures can be harnessed to enhance human-robot interaction. To address this question, this research explores gesture-based interactions to expand the capabilities of a socially assistive robotic exercise coach, investigating the perspectives of both novice users and exercise-therapy experts. This thesis begins by concentrating on the user's engagement with the robot, analyzing the feasibility of minimally supervised gesture-based interactions. This exploration seeks to establish a framework in which robots can interact with users in a more intuitive and responsive manner. The investigation then shifts its focus toward the professionals who are integral to the success of these innovative technologies: the exercise-therapy experts. Roboticists face the challenge of translating the knowledge of these experts into robotic interactions. We address this challenge by developing a teleoperation algorithm that can enable exercise therapists to create customized gesture-based interactions for a robot. Thus, this thesis lays the groundwork for dynamic gesture-based interactions in minimally supervised environments, with implications for not only exercise-coach robots but also broader applications in human-robot interaction.

hi

Project Page [BibTex]

Learning and Testing Powerful Hypotheses

Kübler, J. M.

University of Tübingen, Germany, July 2023 (phdthesis)

ei

[BibTex]

Learning Identifiable Representations: Independent Influences and Multiple Views

Gresele, L.

University of Tübingen, Germany, June 2023 (phdthesis)

ei

[BibTex]


Learning with and for discrete optimization

Paulus, M.

ETH Zurich, Switzerland, May 2023, CLS PhD Program (phdthesis)

ei

[BibTex]

An Open-Source Modular Treadmill for Dynamic Force Measurement with Load Dependant Range Adjustment

Sarvestani, A., Ruppert, F., Badri-Spröwitz, A.

2023, submitted (unpublished)

Abstract
Ground reaction force sensing is one of the key components of gait analysis in legged locomotion research. To measure continuous force data during locomotion, we present a novel compound instrumented treadmill design. The treadmill is 1.7 m long, with a natural frequency of 170 Hz and an adjustable range that can be used for humans and small robots alike. Here, we present the treadmill's design methodology and characterize its natural frequency, noise behavior, and real-life performance. Additionally, we apply a calibration procedure conforming to the ISO 376 standard for all spatial force directions and for the center of pressure position. We achieve an accuracy of ≤ 5.6 N for the ground reaction forces and ≤ 13 mm for the center of pressure position.
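
For orientation, the center of pressure reported by such an instrumented treadmill is conventionally computed from the measured force and moment components; assuming the moments are taken about an origin lying in the belt surface (a textbook force-plate relation, not a formula quoted from the paper), it is

\[ x_{\mathrm{CoP}} = -\frac{M_y}{F_z}, \qquad y_{\mathrm{CoP}} = \frac{M_x}{F_z}. \]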

dlg

arXiv link (url) DOI [BibTex]


Natural Language Processing for Policymaking

Jin, Z., Mihalcea, R.

In Handbook of Computational Social Science for Policy, pages: 141-162, 7, (Editors: Bertoni, E. and Fontana, M. and Gabrielli, L. and Signorelli, S. and Vespe, M.), Springer International Publishing, 2023 (inbook)

ei

DOI [BibTex]

Object-Level Dynamic Scene Reconstruction With Physical Plausibility From RGB-D Images

Strecke, M. F.

Eberhard Karls Universität Tübingen, Tübingen, 2023 (phdthesis)

Abstract
Humans have the remarkable ability to perceive and interact with objects in the world around them. They can easily segment objects from visual data and have an intuitive understanding of how physics influences objects. By contrast, robots are so far often constrained to tailored environments for a specific task, due to their inability to reconstruct a versatile and accurate scene representation. In this thesis, we combine RGB-D video data with background knowledge of real-world physics to develop such a representation for robots.

Our contributions can be separated into two main parts: a dynamic object tracking tool and optimization frameworks that allow for improving shape reconstructions based on physical plausibility. The dynamic object tracking tool "EM-Fusion" detects, segments, reconstructs, and tracks objects from RGB-D video data. We propose a probabilistic data association approach for attributing the image pixels to the different moving objects in the scene. This allows us to track and reconstruct moving objects and the background scene with state-of-the-art accuracy and robustness to occlusions.
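
The probabilistic data association can be pictured with the generic E-step form of a soft assignment (a schematic only; the thesis's actual likelihood terms and priors may differ): the responsibility of object \(k\) for pixel \(i\) is

\[ w_{ik} = \frac{\pi_k \, p(z_i \mid o_k)}{\sum_j \pi_j \, p(z_i \mid o_j)}, \]

where \(z_i\) is the pixel's measurement, \(o_k\) the model of object \(k\) (or of the background), and \(\pi_k\) a mixing weight; each object's map is then updated with pixels weighted by \(w_{ik}\).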

We investigate two ways of further optimizing the reconstructed shapes of moving objects based on physical plausibility. The first of these, "Co-Section", includes physical plausibility by reasoning about the empty space around an object. We observe that no two objects can occupy the same space at the same time and that the depth images in the input video provide an estimate of observed empty space. Based on these observations, we propose intersection and hull constraints, which we combine with the observed surfaces in a global optimization approach. Compared to EM-Fusion, which only reconstructs the observed surface, Co-Section optimizes watertight shapes. These watertight shapes provide a rough estimate of unseen surfaces and could be useful as initialization for further refinement, e.g., by interactive perception. In the second optimization approach, "DiffSDFSim", we reason about object shapes based on physically plausible object motion. We observe that object trajectories after collisions depend on the object's shape, and extend a differentiable physics simulation for optimizing object shapes together with other physical properties (e.g., forces, masses, friction) based on the motion of the objects and their interactions. Our key contributions are using signed distance function models for representing shapes and a novel method for computing gradients that models the dependency of the time of contact on object shapes. We demonstrate that our approach recovers target shapes well by fitting to target trajectories and depth observations. Further, the ground-truth trajectories are recovered well in simulation using the resulting shape and physical properties. This enables predictions about the future motion of objects by physical simulation.

We anticipate that our contributions can be useful building blocks in the development of 3D environment perception for robots. The reconstruction of individual objects as in EM-Fusion is a key ingredient required for interactions with objects. Completed shapes as the ones provided by Co-Section provide useful cues for planning interactions like grasping of objects. Finally, the recovery of shape and other physical parameters using differentiable simulation as in DiffSDFSim allows simulating objects and thus predicting the effects of interactions. Future work might extend the presented works for interactive perception of dynamic environments by comparing these predictions with observed real-world interactions to further improve the reconstructions and physical parameter estimations.

ev

link (url) DOI [BibTex]


Synchronizing Machine Learning Algorithms, Realtime Robotic Control and Simulated Environment with o80

Berenz, V., Widmaier, F., Guist, S., Schölkopf, B., Büchler, D.

Robot Software Architectures Workshop (RSA) 2023, ICRA, 2023 (techreport)

Abstract
Robotic applications require the integration of various modalities, encompassing perception, control of real robots, and possibly the control of simulated environments. While state-of-the-art robotic software solutions such as ROS 2 provide most of the required features, flexible synchronization between algorithms, data streams, and control loops can be tedious to achieve. o80 is a versatile C++ framework for robotics that provides a shared-memory model and a command framework for real-time critical systems. It enables expert users to set up complex robotic systems and to generate Python bindings for scientists. o80's unique feature is its flexible synchronization between processes, including traditional blocking commands and the novel "bursting mode", which allows user code to control the execution of the control loop of the lower-level process. This makes it particularly useful for setups that mix real and simulated environments.
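
To make the "bursting mode" idea concrete, here is a generic Python sketch of the synchronization pattern it describes: user code explicitly releases a fixed number of iterations of a lower-level control loop instead of letting that loop run freely. This is an illustration of the pattern only; all names are invented and none of this is the o80 API.

import multiprocessing as mp

def control_loop(burst_sem, done_sem, state):
    # Runs one control iteration each time the user grants a "burst" ticket.
    while True:
        burst_sem.acquire()          # wait until the user releases an iteration
        state.value += 1             # placeholder for one real control/simulation step
        done_sem.release()           # report completion back to the user side

def burst(burst_sem, done_sem, n):
    # User-side call: run exactly n iterations of the control loop, then return.
    for _ in range(n):
        burst_sem.release()
    for _ in range(n):
        done_sem.acquire()

if __name__ == "__main__":
    burst_sem, done_sem = mp.Semaphore(0), mp.Semaphore(0)
    state = mp.Value("i", 0)
    proc = mp.Process(target=control_loop, args=(burst_sem, done_sem, state), daemon=True)
    proc.start()
    burst(burst_sem, done_sem, 10)   # advance the loop by exactly 10 steps
    print(state.value)               # -> 10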

ei

arxiv poster link (url) [BibTex]


Challenging Common Assumptions in Multi-task Learning

Elich, C., Kirchdorfer, L., Köhler, J. M., Schott, L.

abs/2311.04698, CoRR/arxiv, 2023 (techreport)

ev

paper link (url) [BibTex]

Static and dynamic investigation of magnonic systems: materials, applications and modeling

Schulz, F. M. E.

Universität Stuttgart, Stuttgart, 2023 (phdthesis)

mms

link (url) DOI [BibTex]


2022


Multi-Timescale Representation Learning of Human and Robot Haptic Interactions

Richardson, B.

University of Stuttgart, Stuttgart, Germany, December 2022, Faculty of Computer Science, Electrical Engineering and Information Technology (phdthesis)

Abstract
The sense of touch is one of the most crucial components of the human sensory system. It allows us to safely and intelligently interact with the physical objects and environment around us. By simply touching or dexterously manipulating an object, we can quickly infer a multitude of its properties. For more than fifty years, researchers have studied how humans physically explore and form perceptual representations of objects. Some of these works proposed the paradigm through which human haptic exploration is presently understood: humans use a particular set of exploratory procedures to elicit specific semantic attributes from objects. Others have sought to understand how physically measured object properties correspond to human perception of semantic attributes. Few, however, have investigated how specific explorations are perceived. As robots become increasingly advanced and more ubiquitous in daily life, they are beginning to be equipped with haptic sensing capabilities and algorithms for processing and structuring haptic information. Traditional haptics research has so far strongly influenced the introduction of haptic sensation and perception into robots but has not proven sufficient to give robots the necessary tools to become intelligent autonomous agents. The work presented in this thesis seeks to understand how single and sequential haptic interactions are perceived by both humans and robots. In our first study, we depart from the more traditional methods of studying human haptic perception and investigate how the physical sensations felt during single explorations are perceived by individual people. We treat interactions as probability distributions over a haptic feature space and train a model to predict how similarly a pair of surfaces is rated, predicting perceived similarity with a reasonable degree of accuracy. Our novel method also allows us to evaluate how individual people weigh different surface properties when they make perceptual judgments. The method is highly versatile and presents many opportunities for further studies into how humans form perceptual representations of specific explorations. Our next body of work explores how to improve robotic haptic perception of single interactions. We use unsupervised feature-learning methods to derive powerful features from raw robot sensor data and classify robot explorations into numerous haptic semantic property labels that were assigned from human ratings. Additionally, we provide robots with more nuanced perception by learning to predict graded ratings of a subset of properties. Our methods outperform previous attempts that all used hand-crafted features, demonstrating the limitations of such traditional approaches. To push robot haptic perception beyond evaluation of single explorations, our final work introduces and evaluates a method to give robots the ability to accumulate information over many sequential actions; our approach essentially takes advantage of object permanence by conditionally and recursively updating the representation of an object as it is sequentially explored. We implement our method on a robotic gripper platform that performs multiple exploratory procedures on each of many objects. As the robot explores objects with new procedures, it gains confidence in its internal representations and classification of object properties, thus moving closer to the marvelous haptic capabilities of humans and providing a solid foundation for future research in this domain.
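
Schematically (my notation, not the thesis's), the conditional, recursive update described above can be written as

\[ h_t = f_\theta\!\left(h_{t-1}, \phi_\theta(x_t), a_t\right), \qquad \hat{y}_t = g_\theta(h_t), \]

where \(x_t\) is the raw haptic data from the \(t\)-th exploratory procedure \(a_t\), \(h_t\) is the accumulated object representation, and \(\hat{y}_t\) is the current prediction of the object's properties; confidence grows as more explorations are folded into \(h_t\).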

hi

link (url) Project Page [BibTex]

Towards learning mechanistic models at the right level of abstraction

Neitz, A.

University of Tübingen, Germany, November 2022 (phdthesis)

ei

[BibTex]

Life Improvement Science

Lieder, F., Prentice, M.

In Encyclopedia of Quality of Life and Well-Being Research, Springer, November 2022 (inbook)

re

[BibTex]

Learning Causal Representations for Generalization and Adaptation in Supervised, Imitation, and Reinforcement Learning

Lu, C.

University of Cambridge, Cambridge, UK, October 2022, (Cambridge-Tübingen-Fellowship) (phdthesis)

ei

[BibTex]

Understanding the Influence of Moisture on Fingerpad-Surface Interactions

Nam, S.

University of Tübingen, Tübingen, Germany, October 2022, Department of Computer Science (phdthesis)

Abstract
People frequently touch objects with their fingers. The physical deformation of a finger pressing an object surface stimulates mechanoreceptors, resulting in a perceptual experience. Through interactions between perceptual sensations and motor control, humans naturally acquire the ability to manage friction under various contact conditions. Many researchers have advanced our understanding of human fingers to this point, but their complex structure and the variations in friction they experience due to continuously changing contact conditions necessitate additional study. Moisture is a primary factor that influences many aspects of the finger. In particular, sweat excreted from the numerous sweat pores on the fingerprints modifies the finger's material properties and the contact conditions between the finger and a surface. Measuring changes of the finger's moisture over time and in response to external stimuli presents a challenge for researchers, as commercial moisture sensors do not provide continuous measurements. This dissertation investigates the influence of moisture on fingerpad-surface interactions from diverse perspectives. First, we examine the extent to which moisture on the finger contributes to the sensation of stickiness during contact with glass. Second, we investigate the representative material properties of a finger at three distinct moisture levels, since the softness of human skin varies significantly with moisture. The third perspective is friction; we examine how the contact conditions, including the moisture of a finger, determine the available friction force opposing lateral sliding on glass. Fourth, we have invented and prototyped a transparent in vivo moisture sensor for the continuous measurement of finger hydration. In the first part of this dissertation, we explore how the perceptual intensity of light stickiness relates to the physical interaction between the skin and the surface. We conducted a psychophysical experiment in which nine participants actively pressed their index finger on a flat glass plate with a normal force close to 1.5 N and then detached it after a few seconds. A custom-designed apparatus recorded the contact force vector and the finger contact area during each interaction as well as pre- and post-trial finger moisture. After detaching their finger, participants judged the stickiness of the glass using a nine-point scale. We explored how sixteen physical variables derived from the recorded data correlate with each other and with the stickiness judgments of each participant. These analyses indicate that stickiness perception mainly depends on the pre-detachment pressing duration, the time taken for the finger to detach, and the impulse in the normal direction after the normal force changes sign; finger-surface adhesion seems to build with pressing time, causing a larger normal impulse during detachment and thus a more intense stickiness sensation. We additionally found a strong between-subjects correlation between maximum real contact area and peak pull-off force, as well as between finger moisture and impulse. When a fingerpad presses into a hard surface, the development of the contact area depends on the pressing force and speed. Importantly, it also varies with the finger's moisture, presumably because hydration changes the tissue's material properties. 
Therefore, for the second part of this dissertation, we collected data from one finger repeatedly pressing a glass plate under three moisture conditions, and we constructed a finite element model that we optimized to simulate the same three scenarios. We controlled the moisture of the subject's finger to be dry, natural, or moist and recorded 15 pressing trials in each condition. The measurements include normal force over time plus finger-contact images that are processed to yield gross contact area. We defined the axially symmetric 3D model's lumped parameters to include an SLS-Kelvin model (spring in series with parallel spring and damper) for the bulk tissue, plus an elastic epidermal layer. Particle swarm optimization was used to find the parameter values that cause the simulation to best match the trials recorded in each moisture condition. The results show that the softness of the bulk tissue reduces as the finger becomes more hydrated. The epidermis of the moist finger model is softest, while the natural finger model has the highest viscosity. In the third part of this dissertation, we focused on friction between the fingerpad and the surface. The magnitude of finger-surface friction available at the onset of full slip is crucial for understanding how the human hand can grip and manipulate objects. Related studies revealed the significance of moisture and contact time in enhancing friction. Recent research additionally indicated that surface temperature may also affect friction. However, previously reported friction coefficients have been measured only in dynamic contact conditions, where the finger is already sliding across the surface. In this study, we repeatedly measured the initial friction before full slip under eight contact conditions with low and high finger moisture, pressing time, and surface temperature. Moisture and pressing time both independently increased finger-surface friction across our population of twelve participants, and the effect of surface temperature depended on the contact conditions. Furthermore, detailed analysis of the recorded measurements indicates that micro stick-slip during the partial-slip phase contributes to enhanced friction. For the fourth and final part of this dissertation, we designed a transparent moisture sensor for continuous measurement of fingerpad hydration. Because various stimuli cause the sweat pores on fingerprints to excrete sweat, many researchers want to quantify the flow and assess its impact on the formation of the contact area. Unfortunately, the most popular sensor for skin hydration is opaque and does not offer continuous measurements. Our capacitive moisture sensor consists of a pair of inter-digital electrodes covered by an insulating layer, enabling impedance measurements across a wide frequency range. This proposed sensor is made entirely of transparent materials, which allows us to simultaneously measure the finger's contact area. Electrochemical impedance spectroscopy identifies the equivalent electrical circuit and the electrical component parameters that are affected by the amount of moisture present on the surface of the sensor. Most notably, the impedance at 1 kHz seems to best reflect the relative amount of sweat.
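
For reference, the SLS-Kelvin element mentioned above — a spring of stiffness \(E_1\) in series with a Kelvin–Voigt element (spring \(E_2\) in parallel with a damper \(\eta\)) — obeys the standard constitutive relation (generic symbols, not the fitted parameter values from the thesis)

\[ \eta\,\dot{\sigma} + (E_1 + E_2)\,\sigma = E_1 E_2\,\varepsilon + E_1 \eta\,\dot{\varepsilon}, \]

which reduces to an effective stiffness of \(E_1 E_2/(E_1+E_2)\) for very slow loading and \(E_1\) for very fast loading.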

hi

DOI Project Page [BibTex]

Methods for Minimizing the Spread of Misinformation on the Web

Tabibian, B.

University of Tübingen, Germany, September 2022 (phdthesis)

ei

[BibTex]

Learning and Using Causal Knowledge: A Further Step Towards a Higher-Level Intelligence

Huang, B.

Carnegie Mellon University, Pittsburgh, USA, July 2022 (phdthesis)

ei

[BibTex]

Variational Inference in Dynamical Systems

Ialongo, A.

University of Cambridge, Cambridge, UK, February 2022, (Cambridge-Tübingen-Fellowship) (phdthesis)

ei

[BibTex]

Observability Analysis of Visual-Inertial Odometry with Online Calibration of Velocity-Control Based Kinematic Motion Models

Li, H., Stueckler, J.

abs/2204.06651, CoRR/arxiv, 2022 (techreport)

Abstract
In this paper, we analyze the observability of visual-inertial odometry (VIO) using stereo cameras with a velocity-control-based kinematic motion model. Previous work shows that in the general case the global position and yaw are unobservable in a VIO system, and that roll and pitch also become unobservable if there is no rotation. We prove that roll and pitch become observable when a planar motion constraint is integrated. We also show that the parameters of the motion model are observable.

ev

link (url) [BibTex]


Entwicklung von Methoden und Bausteinen zur Realisierung Komplexer Magnonischer Systeme [Development of Methods and Building Blocks for the Realization of Complex Magnonic Systems]

Groß, F.

Universität Stuttgart, Stuttgart (und Cuvillier Verlag, Göttingen), 2022 (phdthesis)

mms

link (url) [BibTex]

Machine-Learning-Driven Haptic Sensor Design

Sun, H.

University of Tübingen, Tübingen, 2022 (phdthesis)

Abstract
Similar to biological systems, robots may need skin-like sensing ability to perceive interactions in complex, changing, and human-involved environments. Current skin-like sensing technologies are still far behind their biological counterparts when considering resolution, dynamic range, robustness, and surface coverage together. One key challenge is the wiring of sensing elements. During my Ph.D. study, I explore how machine learning can enable the design of a new kind of haptic sensor to deal with this challenge. On the one hand, I propose super-resolution-oriented tactile skins, reducing the number of physical sensing elements while achieving high spatial accuracy. On the other hand, I explore vision-based haptic sensor designs. In this thesis, I present four types of machine-learning-driven haptic sensors that I designed for coarse and fine robotic applications, varying from large-surface sensing (robot limbs) to small-surface sensing (robot fingers). Moreover, I propose a super-resolution theory to guide sensor designs at all levels, from hardware design (material/structure/transduction) through data collection (real/simulated) to signal-processing methods (analytical/data-driven). I investigate two designs for large-scale coarse-resolution sensing, e.g., robotic limbs. HapDef sparsely attaches a few strain gauges to a large curved surface internally to measure the deformation over the whole surface. ERT-DNN wraps a large surface with a piece of multi-layered conductive fabric whose conductivity varies under exerted contacts. I also conceive two approaches for small-scale fine-resolution sensing, e.g., robotic fingertips. BaroDome sparsely embeds a few barometers inside a soft elastomer to measure internal pressure changes caused by external contact. Insight encloses a high-resolution camera to view a soft shell from within. Generically, an inverse problem needs to be solved when trying to obtain high-resolution sensing with a few physical sensing elements. I develop machine-learning frameworks suitable for solving this inverse problem; they process various raw sensor data and extract useful haptic information in practice. The machine-learning methods rely on data collected by an automated robotic stimulation device or synthesized using finite element methods. I build several physical testbeds and finite element models to collect copious data, and I propose machine-learning frameworks that combine data from different sources, cope with the noise in real data, and generalize well from seen to unseen situations. While developing my prototype sensors, I have faced recurring design choices. To inform my developments and guide future research, I propose a unified theory built on the concept of taxel-value-isolines. It captures the physical effects required for super-resolution, ties them to all parts of the sensor design, and allows us to assess them quantitatively. The theory explains the physically achievable accuracy for localizing and quantifying contact, given the uncertainties introduced by measurement noise in the sensing elements. The theoretical analysis aims to predict the best performance before a physical prototype is built and helps to evaluate the hardware design, data collection, and data-processing methods during implementation. This thesis presents a new perspective on haptic sensor design. Using machine learning to replace the entire data-processing pipeline, I present several haptic sensor designs for applications ranging from large-surface skins to high-resolution tactile fingertip sensors. The developed theory for obtaining optimal super-resolution can guide future sensor designs.
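
As a toy illustration of the super-resolution idea described above (synthetic data and made-up numbers; this is not the thesis's pipeline or any of its named sensors), a handful of sensing elements can localize contact far more finely than their spacing once a learned mapping inverts their combined response:

import numpy as np

rng = np.random.default_rng(0)
taxels = np.array([0.1, 0.35, 0.6, 0.85])        # 4 sensing elements on a 1-D strip

def simulate_reading(contact_x):
    # Each element responds with a Gaussian falloff of its distance to the contact.
    return np.exp(-((taxels - contact_x) ** 2) / (2 * 0.15 ** 2))

# "Calibration" data: known contact locations and the resulting noisy readings.
X_train = rng.uniform(0.0, 1.0, size=500)
R_train = np.stack([simulate_reading(x) for x in X_train])
R_train += 0.01 * rng.standard_normal(R_train.shape)

# Ridge regression from the 4 readings (plus bias) to the contact coordinate.
A = np.hstack([R_train, np.ones((500, 1))])
w = np.linalg.solve(A.T @ A + 1e-3 * np.eye(5), A.T @ X_train)

# Localization is much finer than the element spacing ("super-resolution").
test_x = 0.42
pred = float(np.hstack([simulate_reading(test_x), 1.0]) @ w)
print(round(pred, 3))                             # lands near 0.42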

al

link (url) [BibTex]

Causal Models for Dynamical Systems

Peters, J., Bauer, S., Pfister, N.

In Probabilistic and Causal Inference: The Works of Judea Pearl, pages: 671-690, 1, Association for Computing Machinery, 2022 (inbook)

ei

arXiv DOI [BibTex]

Towards Causal Algorithmic Recourse

Karimi, A. H., von Kügelgen, J., Schölkopf, B., Valera, I.

In xxAI - Beyond Explainable AI: International Workshop, Held in Conjunction with ICML 2020, July 18, 2020, Vienna, Austria, Revised and Extended Papers, pages: 139-166, (Editors: Holzinger, Andreas and Goebel, Randy and Fong, Ruth and Moon, Taesup and Müller, Klaus-Robert and Samek, Wojciech), Springer International Publishing, 2022 (inbook)

ei plg

DOI [BibTex]

Causality for Machine Learning

Schölkopf, B.

In Probabilistic and Causal Inference: The Works of Judea Pearl, pages: 765-804, 1, Association for Computing Machinery, New York, NY, USA, 2022 (inbook)

ei

arXiv DOI [BibTex]

CLEVR-X: A Visual Reasoning Dataset for Natural Language Explanations

Salewski, L., Koepke, A. S., Lensch, H. P. A., Akata, Z.

In xxAI - Beyond Explainable AI: International Workshop, Held in Conjunction with ICML 2020, July 18, 2020, Vienna, Austria, Revised and Extended Papers, pages: 69-88, (Editors: Holzinger, Andreas and Goebel, Randy and Fong, Ruth and Moon, Taesup and Müller, Klaus-Robert and Samek, Wojciech), Springer International Publishing, 2022 (inbook)

ei

DOI [BibTex]


2021


Magnetic Micro-/Nanopropellers for Biomedicine

Qiu, T., Jeong, M., Goyal, R., Kadiri, V., Sachs, J., Fischer, P.

In Field-Driven Micro and Nanorobots for Biology and Medicine, pages: 389-410, 16, (Editors: Sun, Y. and Wang, X. and Yu, J.), Springer Nature, November 2021 (inbook)

Abstract
In nature, many bacteria swim by rotating their helical flagella. A particularly promising class of artificial micro- and nano-robots mimics this propeller-like propulsion mechanism to move through fluids and tissues for applications in minimally invasive medicine. Several fundamental challenges have to be overcome in order to build micro-machines that move like bacteria for in vivo applications. Here, we review recent advances in magnetically powered micro-/nano-propellers. Four important aspects of the propellers – the geometrical shape, the fabrication method, the generation of magnetic fields for actuation, and the choice of biocompatible magnetic materials – are highlighted. First, we elucidate the fundamental requirements that arise from hydrodynamics at low Reynolds number (Re). We discuss the role that the propellers' shape and symmetry play in realizing effective propulsion at low Re. Second, the additive nano-fabrication method Glancing Angle Deposition is discussed as a versatile technique to quickly grow large numbers of designer nano-helices. Third, systems to generate rotating magnetic fields via permanent magnets or electromagnetic coils are presented. Finally, the biocompatibility of the magnetic materials is discussed. Iron-platinum is highlighted due to its biocompatibility and superior magnetic properties, which make it promising for targeted delivery, minimally invasive magnetic nano-devices, and biomedical applications.
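
The low-Reynolds-number regime referred to above can be made concrete with a standard order-of-magnitude estimate (illustrative numbers, not values from the chapter): for a propeller of size L ≈ 1 μm moving at v ≈ 10 μm/s in water,

\[ \mathrm{Re} = \frac{\rho v L}{\mu} \approx \frac{10^{3}\,\mathrm{kg\,m^{-3}} \cdot 10^{-5}\,\mathrm{m\,s^{-1}} \cdot 10^{-6}\,\mathrm{m}}{10^{-3}\,\mathrm{Pa\,s}} = 10^{-5}, \]

so inertia is negligible and reciprocal strokes produce no net displacement, which is why corkscrew-like, symmetry-breaking geometries are needed for propulsion.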

pf

link (url) DOI [BibTex]

Models for Data-Efficient Reinforcement Learning on Real-World Applications

Doerr, A.

University of Stuttgart, Stuttgart, October 2021 (phdthesis)

ics

DOI [BibTex]

Dynamics of Learning and Learning of Dynamics

Mehrjou, A.

ETH Zürich, Zürich, October 2021 (phdthesis)

ei

DOI [BibTex]

A Large Scale Brain-Computer Interface for Patients with Neurological Diseases

Hohmann, M.

University of Tübingen, Germany, September 2021 (phdthesis)

ei

[BibTex]

Deep Learning Beyond The Training Distribution

Parascandolo, G.

ETH Zürich, Switzerland, Zürich, September 2021, (CLS Fellowship Program) (phdthesis)

ei

DOI [BibTex]

Electriflow: Augmenting Books With Tangible Animation Using Soft Electrohydraulic Actuators

Purnendu, Novack, S., Acome, E., Alistar, M., Keplinger, C., Gross, M. D., Bruns, C., Leithinger, D.

In ACM SIGGRAPH 2021 Labs, pages: 1-2, Association for Computing Machinery, SIGGRAPH 2021, August 2021 (inbook)

Abstract
We present Electriflow: a method of augmenting books with tangible animation employing soft electrohydraulic actuators. These actuators are compact, silent, and fast in operation, and can be fabricated with commodity materials. They generate an immediate hydraulic force upon electrostatic activation without an external fluid supply source, enabling a simple and self-contained design. Electriflow actuators produce an immediate shape transition from a flat to a folded state, which enables their seamless integration into books. For the Emerging Technologies exhibit, we will demonstrate the prototype of a book augmented with the capability of tangible animation.

rm

Supplemental Material link (url) DOI [BibTex]

HuggieBot: An Interactive Hugging Robot With Visual and Haptic Perception

Block, A. E.

ETH Zürich, Zürich, August 2021, Department of Computer Science (phdthesis)

Abstract
Hugs are one of the first forms of contact and affection humans experience. Receiving a hug is one of the best ways to feel socially supported, and the lack of social touch can have severe adverse effects on an individual's well-being. Due to the prevalence and health benefits of hugging, roboticists are interested in creating robots that can hug humans as seamlessly as humans hug other humans. However, hugs are complex affective interactions that need to adapt to the height, body shape, and preferences of the hugging partner, and they often include intra-hug gestures like squeezes. This dissertation aims to create a series of hugging robots that use visual and haptic perception to provide enjoyable interactive hugs. Each of the four presented HuggieBot versions is evaluated by measuring how users emotionally and behaviorally respond to hugging it; HuggieBot 4.0 is explicitly compared to a human hugging partner using physiological measures. Building on research both within and outside of human-robot interaction (HRI), this thesis proposes eleven tenets of natural and enjoyable robotic hugging. These tenets were iteratively crafted through a design process combining user feedback and experimenter observation, and they were evaluated through user studies. A good hugging robot should (1) be soft, (2) be warm, (3) be human-sized, (4) autonomously invite the user for a hug when it detects someone in its personal space, and then it should wait for the user to begin walking toward it before closing its arms to ensure a consensual and synchronous hugging experience. It should also (5) adjust its embrace to the user's size and position, (6) reliably release when the user wants to end the hug, and (7) perceive the user's height and adapt its arm positions accordingly to comfortably fit around the user at appropriate body locations. Finally, a hugging robot should (8) accurately detect and classify gestures applied to its torso in real time, regardless of the user's hand placement, (9) respond quickly to their intra-hug gestures, (10) adopt a gesture paradigm that blends user preferences with slight variety and spontaneity, and (11) occasionally provide unprompted, proactive affective social touch to the user through intra-hug gestures. We believe these eleven tenets are essential to delivering high-quality robot hugs. Their presence results in a hug that pleases the user, and their absence results in a hug that is likely to be inadequate. We present these tenets as guidelines for future hugging robot creators to follow when designing new hugging robots to ensure user acceptance. We tested the four versions of HuggieBot through six user studies. First, we analyzed data collected in a previous study with a modified Willow Garage Personal Robot 2 (PR2) to evaluate human responses to different robot physical characteristics and hugging behaviors. Participants experienced and evaluated twelve hugs with the robot, divided into three randomly ordered trials that focused on physical robot characteristics (single factor, three levels) and nine randomly ordered trials with low, medium, and high hug pressure and duration (two factors, three levels each). Second, we created an entirely new robotic platform, HuggieBot 2.0, according to our first six tenets. The new platform features a soft, warm, inflated body (HuggieChest) and uses visual and haptic sensing to deliver closed-loop hugging. 
We first verified the outward appeal of this platform compared to the previous PR2-based HuggieBot 1.0 via an online video-watching study involving 117 users. We then conducted an in-person experiment in which 32 users each exchanged eight hugs with HuggieBot 2.0, experiencing all combinations of visual hug initiation, haptic sizing, and haptic releasing. We then refine the original fourth tenet (visually perceive its user) and present the remaining five tenets for designing interactive hugging robots; we validate the full list of eleven tenets through more in-person studies with our custom robot. To enable perceptive and pleasing autonomous robot behavior, we investigated robot responses to four human intra-hug gestures: holding, rubbing, patting, and squeezing. The robot's inflated torso's microphone and pressure sensor collected data of 32 people repeatedly demonstrating these gestures, which were used to develop a perceptual algorithm that classifies user actions with 88% accuracy. From user preferences, we created a probabilistic behavior algorithm that chooses robot responses in real time. We implemented improvements to the robot platform to create a third version of our robot, HuggieBot 3.0. We then validated its gesture perception system and behavior algorithm in a fifth user study with 16 users. Finally, we refined the quality and comfort of the embrace by adjusting the joint torques and joint angles of the closed pose position, we further improved the robot's visual perception to detect changes in user approach, we upgraded the robot's response to users who do not press on its back, and we had the robot respond to all intra-hug gestures with squeezes to create our final version of the robotic platform, HuggieBot 4.0. In our sixth user study, we investigated the emotional and physiological effects of hugging a robot compared to the effects of hugging a friendly but unfamiliar person. We continuously monitored participant heart rate and collected saliva samples at seven time points across the 3.5-hour study to measure the temporal evolution of cortisol and oxytocin. We used an adapted Trier Social Stress Test (TSST) protocol to reliably and ethically induce stress in the participants. They then experienced one of five different hug intervention methods before all interacting with HuggieBot 4.0. The results of these six user studies validated our eleven hugging tenets and informed the iterative design of HuggieBot. We see that users enjoy robot softness, robot warmth, and being physically squeezed by the robot. Users dislike being released too soon from a hug and equally dislike being held by the robot for too long. Adding haptic reactivity definitively improves user perception of a hugging robot; the robot's responses and proactive intra-hug gestures were greatly enjoyed. In our last study, we learned that HuggieBot can positively affect users on a physiological level and is somewhat comparable to hugging a person. Participants have more favorable opinions about hugging robots after prolonged interaction with HuggieBot in all of our research studies.

hi

DOI Project Page [BibTex]

Chemically active micromotors

Yu, T.

University of Stuttgart, Stuttgart, July 2021 (phdthesis)

Abstract
Motion is a mark of living systems. It is realised by energy conversion to perform vital tasks and is thus of great importance for all living systems. One approach to achieving motion is to endow micro/nano objects with active motion. Unlike at the macro scale, active swimming in a fluid at the micro scale cannot be achieved by reciprocal movements, so breaking symmetry at the micro scale becomes a critical issue. One challenge is that building systems that already show symmetry breaking often requires outside intervention. Another is that there are few examples where active microscale motion causes a macroscopic effect or facilitates a useful application. In the first part of the thesis, the first challenge is addressed and a new route to spontaneous symmetry breaking is developed. Microscale motion in artificial chemical systems has thus far been realised with chemical micromotors: microparticles fabricated to possess two different halves, known as Janus particles. One half is catalytically active and drives the self-phoretic propulsion. Janus micromotors are generally fabricated using deposition techniques such as PVD and CVD, which require deposition onto a surface and thus limit the number of structures that can be fabricated. In this work, we show that two species of isotropic (symmetric) microparticles, a photocatalytically active TiO2 particle and a passive SiO2 particle, can spontaneously form a dimer structure. Under UV illumination, a chemical gradient is generated around the photoactive particle. The passive particle is attracted up the concentration gradient of the reaction product, toward the active particle, and a dimer forms that starts to self-propel. The speed of the dimer can be controlled by adjusting the UV intensity. The mechanism of dimer formation is examined and shown to be due to a diffusiophoretic interaction between the active and the passive particle. The interaction force and the propulsion of the dimer swimmers are examined, and the roles of salts, particle size, and concentration are studied. An additional repulsive interaction is observed between two active particles. An optimal volumetric particle density of ≤ 2% is identified for dimer formation, and the dimers remain active for > 20 s. This thesis thereby demonstrates a self-assembly route in which chemical activity causes dimer formation and thus spontaneous symmetry breaking, without requiring any physical fabrication steps. Most work thus far has studied the behaviour of individual chemical micromotors (Janus particles) at the micro scale. Inducing a macroscopic effect and facilitating an application with micro/nano active particles requires the cooperative effects of many "micromotors". Therefore, we developed a novel fabrication method that allows a large number of Janus structures to be assembled in an ordered manner. We fabricated an array of photoactive Janus microstructures on a surface by glancing angle deposition (GLAD) onto a photolithographically patterned substrate. Illuminating the Janus array with UV light initiates the water-splitting reaction, which produces an osmotic flow around the microstructures. The osmotic flow at each structure couples with the flows generated by neighbouring structures, so the microscopic osmotic flows result in a macroscopic fluid flow.
By adjusting the spacing between single micro structures, an optimised pumping velocity is achieved with a micro pillar diameter of 2 μm and a spacing of ∼ 2 μm. We compared the pumping performance of the micro pillar array with other topological chemical structures, such as micro Janus bar arrays and 2D micro Janus disk arrays, and find that the 3D structure is essential to generate a chemical gradient on the surface. We believe that this is the first chemical micropump formed by chemically active Janus structures. The active pumping surface can provide a flow speed of up to 4 μm/s. This active surface consisting of micropillar arrays can be easily integrated in most microchannels and serve as an on-board micropump. A theoretical model and numerical simulations are presented to describe the microchannel pumping. The theory reproduces the experimentally measured flow profiles very well. We have thus established a new type of chemical pump, which can wirelessly pump fluid in a microchannel, and the pumping volume rate and flow profile can be modified simply by changing the nature and orientation of the self-pumping walls.

pf

DOI [BibTex]

Röntgenmikroskopische Untersuchungen der Magnetisierungsdynamik in nanoskaligen magnonischen Wellenleiterstrukturen [X-Ray Microscopy Investigations of Magnetization Dynamics in Nanoscale Magnonic Waveguide Structures]

Träger, N.

Universität Stuttgart, Stuttgart (und Cuvillier Verlag, Göttingen), June 2021 (phdthesis)

mms

[BibTex]

Optimization Algorithms for Machine Learning

Raj, A.

University of Tübingen, Germany, June 2021 (phdthesis)

ei

[BibTex]

Causal Inference in Vision

Meding, K.

Eberhard Karls Universität Tübingen, Tübingen, June 2021 (phdthesis)

ei

[BibTex]

Toward a Science of Effective Well-Doing

Lieder, F., Prentice, M., Corwin-Renner, E.

May 2021 (techreport)

Abstract
Well-doing, broadly construed, encompasses acting and thinking in ways that contribute to humanity’s flourishing in the long run. This often takes the form of setting a prosocial goal and pursuing it over an extended period of time. To set and pursue goals in a way that is extremely beneficial for humanity (effective well-doing), people often have to employ critical thinking and far-sighted, rational decision-making in the service of the greater good. To promote effective well-doing, we need to better understand its determinants and psychological mechanisms, as well as the barriers to effective well-doing and how they can be overcome. In this article, we introduce a taxonomy of different forms of well-doing and introduce a conceptual model of the cognitive mechanisms of effective well-doing. We view effective well-doing as the upper end of a moral continuum whose lower half comprises behaviors that are harmful to humanity (ill-doing), and we argue that the capacity for effective well-doing has to be developed through personal growth (e.g., learning how to pursue goals effectively). Research on these phenomena has so far been scattered across numerous disconnected literatures from multiple disciplines. To bring these communities together, we call for the establishment of a transdisciplinary research field focussed on understanding and promoting effective well-doing and personal growth as well as understanding and reducing ill-doing. We define this research field in terms of its goals and questions. We review what is already known about these questions in different disciplines and argue that laying the scientific foundation for promoting effective well-doing is one of the most valuable contributions that the behavioral sciences can make in the 21st century.

re

Preprint Project Page [BibTex]


Machine Learning Methods for Modeling Synthesizable Molecules

Bradshaw, J.

University of Cambridge, Cambridge, UK, April 2021, (Cambridge-Tübingen-Fellowship) (phdthesis)

ei

DOI [BibTex]

Advanced Diffusion Studies of Active Enzymes and Nanosystems

Günther, J.

Universität Stuttgart, Stuttgart (und Cuvillier Verlag, Göttingen), February 2021 (phdthesis)

Abstract
Enzymes are fascinating chemical nanomachines that catalyze many reactions, which are essential for life. Studying enzymes is therefore important in a biological and medical context, but the catalytic potential of enzymes also finds use in organic synthesis. This thesis is concerned with the fundamental question whether the catalytic reaction of an enzyme or molecular catalyst can cause it to show enhanced diffusion. Diffusion measurements were performed with advanced fluorescence correlation spectroscopy (FCS) and diffusion nuclear magnetic resonance (NMR) spectroscopy techniques. The measurement results lead to the unraveling of artefacts in enzyme FCS and molecular NMR measurements, and thus seriously question several recent publications, which claim that enzymes and molecular catalysts are active matter and experience enhanced diffusion. In addition to these fundamental questions, this thesis also examines the use of enzymes as biocatalysts. A novel nanoconstruct – the enzyme-phage-colloid (E-P-C) – is presented, which utilizes filamentous viruses as immobilization templates for enzymes. E-P-Cs can be used for biocatalysis with convenient magnetic recovery of enzymes and serve as enzymatic micropumps. The latter can autonomously pump blood at physiological urea concentrations.

pf

link (url) [BibTex]

2020


Delivering Expressive and Personalized Fingertip Tactile Cues

Young, E. M.

University of Pennsylvania, Philadelphia, PA, December 2020, Department of Mechanical Engineering and Applied Mechanics (phdthesis)

Abstract
Wearable haptic devices have seen growing interest in recent years, but providing realistic tactile feedback is not a challenge that is soon to be solved. Daily interactions with physical objects elicit complex sensations at the fingertips. Furthermore, human fingertips exhibit a broad range of physical dimensions and perceptive abilities, adding increased complexity to the task of simulating haptic interactions in a compelling manner. However, as the applications of wearable haptic feedback grow, concerns of wearability and generalizability often persuade tactile device designers to simplify the complexities associated with rendering realistic haptic sensations. As such, wearable devices tend to be optimized for particular uses and average users, rendering only the most salient dimensions of tactile feedback for a given task and assuming all users interpret the feedback in a similar fashion. We propose that providing more realistic haptic feedback will require in-depth examinations of higher-dimensional tactile cues and personalization of these cues for individual users. In this thesis, we aim to provide hardware and software-based solutions for rendering more expressive and personalized tactile cues to the fingertip. We first explore the idea of rendering six-degree-of-freedom (6-DOF) tactile fingertip feedback via a wearable device, such that any possible fingertip interaction with a flat surface can be simulated. We highlight the potential of parallel continuum manipulators (PCMs) to meet the requirements of such a device, and we refine the design of a PCM for providing fingertip tactile cues. We construct a manually actuated prototype to validate the concept, and then continue to develop a motorized version, named the Fingertip Puppeteer, or Fuppeteer for short. Various error reduction techniques are presented, and the resulting device is evaluated by analyzing system responses to step inputs, measuring forces rendered to a biomimetic finger sensor, and comparing intended sensations to perceived sensations of twenty-four participants in a human-subject study. Once the functionality of the Fuppeteer is validated, we begin to explore how the device can be used to broaden our understanding of higher-dimensional tactile feedback. One such application is using the 6-DOF device to simulate different lower-dimensional devices. We evaluate 1-, 3-, and 6-DOF tactile feedback during shape discrimination and mass discrimination in a virtual environment, also comparing to interactions with real objects. Results from 20 naive study participants show that higher-dimensional tactile feedback may indeed allow completion of a wider range of virtual tasks, but that feedback dimensionality surprisingly does not greatly affect the exploratory techniques employed by the user. To address alternative approaches to improving tactile rendering in scenarios where low-dimensional tactile feedback is appropriate, we then explore the idea of personalizing feedback for a particular user. We present two software-based approaches to personalize an existing data-driven haptic rendering algorithm for fingertips of different sizes. We evaluate our algorithms in the rendering of pre-recorded tactile sensations onto rubber casts of six different fingertips as well as onto the real fingertips of 13 human participants, all via a 3-DOF wearable device. Results show that both personalization approaches significantly reduced force error magnitudes and improved realism ratings.

hi

Project Page [BibTex]

Causal Feature Selection in Neuroscience

Mastakouri, A.

University of Tübingen, Germany, December 2020 (phdthesis)

ei

link (url) [BibTex]
