

2024


Being Neurodivergent in Academia: Autistic and abroad

Schulz, A.

eLife, 13, March 2024 (article)

Abstract
An AuDHD researcher recounts the highs and lows of relocating from the United States to Germany for his postdoc.


DOI [BibTex]


IMU-Based Kinematics Estimation Accuracy Affects Gait Retraining Using Vibrotactile Cues

Rokhmanova, N., Pearl, O., Kuchenbecker, K. J., Halilaj, E.

IEEE Transactions on Neural Systems and Rehabilitation Engineering, 32, pages: 1005-1012, February 2024 (article)

Abstract
Wearable sensing using inertial measurement units (IMUs) is enabling portable and customized gait retraining for knee osteoarthritis. However, the vibrotactile feedback that users receive directly depends on the accuracy of IMU-based kinematics. This study investigated how kinematic errors impact an individual's ability to learn a therapeutic gait using vibrotactile cues. Sensor accuracy was computed by comparing the IMU-based foot progression angle to marker-based motion capture, which was used as ground truth. Thirty subjects were randomized into three groups to learn a toe-in gait: one group received vibrotactile feedback during gait retraining in the laboratory, another received feedback outdoors, and the control group received only verbal instruction and proceeded directly to the evaluation condition. All subjects were evaluated on their ability to maintain the learned gait in a new outdoor environment. We found that subjects with high tracking errors exhibited more incorrect responses to vibrotactile cues and slower learning rates than subjects with low tracking errors. Subjects with low tracking errors outperformed the control group in the evaluation condition, whereas those with higher error did not. Errors were correlated with foot size and angle magnitude, which may indicate a non-random algorithmic bias. The accuracy of IMU-based kinematics has a cascading effect on feedback; ignoring this effect could lead researchers or clinicians to erroneously classify a patient as a non-responder if they did not improve after retraining. To use patient and clinician time effectively, future implementation of portable gait retraining will require assessment across a diverse range of patients.

DOI Project Page [BibTex]


How Should Robots Exercise with People? Robot-Mediated Exergames Win with Music, Social Analogues, and Gameplay Clarity

Fitter, N. T., Mohan, M., Preston, R. C., Johnson, M. J., Kuchenbecker, K. J.

Frontiers in Robotics and AI, 10(1155837):1-18, January 2024 (article)

Abstract
The modern worldwide trend toward sedentary behavior comes with significant health risks. An accompanying wave of health technologies has tried to encourage physical activity, but these approaches often yield limited use and retention. Due to their unique ability to serve as both a health-promoting technology and a social peer, we propose robots as a game-changing solution for encouraging physical activity. This article analyzes the eight exergames we previously created for the Rethink Baxter Research Robot in terms of four key components that are grounded in the video-game literature: repetition, pattern matching, music, and social design. We use these four game facets to assess gameplay data from 40 adult users who each experienced the games in balanced random order. In agreement with prior research, our results show that relevant musical cultural references, recognizable social analogues, and gameplay clarity are good strategies for taking an otherwise highly repetitive physical activity and making it engaging and popular among users. Others who study socially assistive robots and rehabilitation robotics can benefit from this work by considering the presented design attributes to generate future hypotheses and by using our eight open-source games to pursue follow-up work on social-physical exercise with robots.

DOI Project Page [BibTex]


Robust Surface Recognition with the Maximum Mean Discrepancy: Degrading Haptic-Auditory Signals through Bandwidth and Noise

Khojasteh, B., Shao, Y., Kuchenbecker, K. J.

IEEE Transactions on Haptics, pages: 1-8, January 2024 (article)

Abstract
Sliding a tool across a surface generates rich sensations that can be analyzed to recognize what is being touched. However, the optimal configuration for capturing these signals remains unclear. To bridge this gap, we consider haptic-auditory data as a human explores surfaces with different steel tools, including accelerations of the tool and finger, force and torque applied to the surface, and contact sounds. Our classification pipeline uses the maximum mean discrepancy (MMD) to quantify differences in data distributions in a high-dimensional space for inference. With recordings from three hemispherical tool diameters and ten diverse surfaces, we conducted two degradation studies by decreasing sensing bandwidth and increasing added noise. We evaluate the haptic-auditory recognition performance achieved with the MMD to compare newly gathered data to each surface in our known library. The results indicate that acceleration signals alone have great potential for high-accuracy surface recognition and are robust against noise contamination. The optimal accelerometer bandwidth exceeds 1000 Hz, suggesting that useful vibrotactile information extends beyond the range of human perception. Finally, smaller tool tips generate contact vibrations with better noise robustness. The provided sensing guidelines may enable superhuman performance in portable surface recognition, which could benefit quality control, material documentation, and robotics.
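The two degradation operations described in this abstract can be roughly illustrated in code: reduced sensing bandwidth simulated by a low-pass filter, and noise contamination by additive white Gaussian noise at a chosen signal-to-noise ratio. This is only a sketch under stated assumptions; the function names, the one-pole filter, and the SNR parameterization are illustrative, not the authors' implementation.

```python
import numpy as np

def limit_bandwidth(x, fs, cutoff_hz):
    """One-pole low-pass filter simulating a reduced sensing bandwidth.

    x: 1-D float signal, fs: sampling rate in Hz, cutoff_hz: -3 dB point.
    """
    alpha = 1.0 - np.exp(-2.0 * np.pi * cutoff_hz / fs)
    y = np.empty_like(x)
    acc = 0.0
    for i, v in enumerate(x):
        acc += alpha * (v - acc)  # first-order exponential smoothing
        y[i] = acc
    return y

def add_noise(x, snr_db, rng=None):
    """Add white Gaussian noise at the requested signal-to-noise ratio (dB)."""
    rng = rng or np.random.default_rng()
    noise_power = np.mean(x**2) / (10.0 ** (snr_db / 10.0))
    return x + rng.normal(0.0, np.sqrt(noise_power), x.shape)
```

Sweeping `cutoff_hz` downward and `snr_db` downward on recorded acceleration signals mirrors the bandwidth and noise degradation axes studied in the paper.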


DOI Project Page [BibTex]


InterCap: Joint Markerless 3D Tracking of Humans and Objects in Interaction from Multi-view RGB-D Images

Huang, Y., Taheri, O., Black, M. J., Tzionas, D.

International Journal of Computer Vision (IJCV), 2024 (article)

Abstract
Humans constantly interact with objects to accomplish tasks. To understand such interactions, computers need to reconstruct these in 3D from images of whole bodies manipulating objects, e.g., for grasping, moving and using the latter. This involves key challenges, such as occlusion between the body and objects, motion blur, depth ambiguities, and the low image resolution of hands and graspable object parts. To make the problem tractable, the community has followed a divide-and-conquer approach, focusing either only on interacting hands, ignoring the body, or on interacting bodies, ignoring the hands. However, these are only parts of the problem. On the contrary, recent work focuses on the whole problem. The GRAB dataset addresses whole-body interaction with dexterous hands but captures motion via markers and lacks video, while the BEHAVE dataset captures video of body-object interaction but lacks hand detail. We address the limitations of prior work with InterCap, a novel method that reconstructs interacting whole-bodies and objects from multi-view RGB-D data, using the parametric whole-body SMPL-X model and known object meshes. To tackle the above challenges, InterCap uses two key observations: (i) Contact between the body and object can be used to improve the pose estimation of both. (ii) Consumer-level Azure Kinect cameras let us set up a simple and flexible multi-view RGB-D system for reducing occlusions, with spatially calibrated and temporally synchronized cameras. With our InterCap method we capture the InterCap dataset, which contains 10 subjects (5 males and 5 females) interacting with 10 daily objects of various sizes and affordances, including contact with the hands or feet. To this end, we introduce a new data-driven hand motion prior, as well as explore simple ways for automatic contact detection based on 2D and 3D cues. 
In total, InterCap has 223 RGB-D videos, resulting in 67,357 multi-view frames, each containing 6 RGB-D images, paired with pseudo ground-truth 3D body and object meshes. Our InterCap method and dataset fill an important gap in the literature and support many research directions. Data and code are available at https://intercap.is.tue.mpg.de.


Paper link (url) DOI [BibTex]


Machine learning of a density functional for anisotropic patchy particles

Simon, A., Weimar, J., Martius, G., Oettel, M.

Journal of Chemical Theory and Computation, 2024 (article)

link (url) DOI [BibTex]


Soft Sub-Structured Multi-Material Biosensor Hydrogels with Enzymes Retained by Plant Viral Scaffolds

Grübel, J., Wendlandt, T., Urban, D., Jauch, C. O., Wege, C., Tovar, G. E. M., Southan, A.

Macromolecular Bioscience, 24(3), Wiley, 2024 (article)

link (url) DOI [BibTex]


Event-based Non-Rigid Reconstruction of Low-Rank Parametrized Deformations from Contours

Xue, Y., Li, H., Leutenegger, S., Stueckler, J.

International Journal of Computer Vision (IJCV), 2024 (article)

Abstract
Visual reconstruction of fast non-rigid object deformations over time is a challenge for conventional frame-based cameras. In recent years, event cameras have gained significant attention due to their bio-inspired properties, such as high temporal resolution and high dynamic range. In this paper, we propose a novel approach for reconstructing such deformations using event measurements. Under the assumption of a static background, where all events are generated by the motion, our approach estimates the deformation of objects from events generated at the object contour in a probabilistic optimization framework. It associates events to mesh faces on the contour and maximizes the alignment of the line of sight through the event pixel with the associated face. In experiments on synthetic and real data of human body motion, we demonstrate the advantages of our method over state-of-the-art optimization and learning-based approaches for reconstructing the motion of human arms and hands. In addition, we propose an efficient event stream simulator to synthesize realistic event data for human motion.

DOI [BibTex]


HMP: Hand Motion Priors for Pose and Shape Estimation from Video

Duran, E., Kocabas, M., Choutas, V., Fan, Z., Black, M. J.

Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2024 (article)

webpage pdf code [BibTex]


2023

FLARE: Fast Learning of Animatable and Relightable Mesh Avatars

Bharadwaj, S., Zheng, Y., Hilliges, O., Black, M. J., Abrevaya, V. F.

ACM Transactions on Graphics, 42(6):204:1-204:15, December 2023 (article) Accepted

Abstract
Our goal is to efficiently learn personalized animatable 3D head avatars from videos that are geometrically accurate, realistic, relightable, and compatible with current rendering systems. While 3D meshes enable efficient processing and are highly portable, they lack realism in terms of shape and appearance. Neural representations, on the other hand, are realistic but lack compatibility and are slow to train and render. Our key insight is that it is possible to efficiently learn high-fidelity 3D mesh representations via differentiable rendering by exploiting highly-optimized methods from traditional computer graphics and approximating some of the components with neural networks. To that end, we introduce FLARE, a technique that enables the creation of animatable and relightable mesh avatars from a single monocular video. First, we learn a canonical geometry using a mesh representation, enabling efficient differentiable rasterization and straightforward animation via learned blendshapes and linear blend skinning weights. Second, we follow physically-based rendering and factor observed colors into intrinsic albedo, roughness, and a neural representation of the illumination, allowing the learned avatars to be relit in novel scenes. Since our input videos are captured on a single device with a narrow field of view, modeling the surrounding environment light is non-trivial. Based on the split-sum approximation for modeling specular reflections, we address this by approximating the pre-filtered environment map with a multi-layer perceptron (MLP) modulated by the surface roughness, eliminating the need to explicitly model the light. We demonstrate that our mesh-based avatar formulation, combined with learned deformation, material, and lighting MLPs, produces avatars with high-quality geometry and appearance, while also being efficient to train and render compared to existing approaches.

Paper Project Page Code DOI [BibTex]


From Skin to Skeleton: Towards Biomechanically Accurate 3D Digital Humans

(Honorable Mention for Best Paper)

Keller, M., Werling, K., Shin, S., Delp, S., Pujades, S., Liu, C. K., Black, M. J.

ACM Transactions on Graphics (TOG), 42(6):253:1-253:15, December 2023 (article)

Abstract
Great progress has been made in estimating 3D human pose and shape from images and video by training neural networks to directly regress the parameters of parametric human models like SMPL. However, existing body models have simplified kinematic structures that do not correspond to the true joint locations and articulations in the human skeletal system, limiting their potential use in biomechanics. On the other hand, methods for estimating biomechanically accurate skeletal motion typically rely on complex motion capture systems and expensive optimization methods. What is needed is a parametric 3D human model with a biomechanically accurate skeletal structure that can be easily posed. To that end, we develop SKEL, which re-rigs the SMPL body model with a biomechanics skeleton. To enable this, we need training data of skeletons inside SMPL meshes in diverse poses. We build such a dataset by optimizing biomechanically accurate skeletons inside SMPL meshes from AMASS sequences. We then learn a regressor from SMPL mesh vertices to the optimized joint locations and bone rotations. Finally, we re-parametrize the SMPL mesh with the new kinematic parameters. The resulting SKEL model is animatable like SMPL but with fewer, and biomechanically-realistic, degrees of freedom. We show that SKEL has more biomechanically accurate joint locations than SMPL, and the bones fit inside the body surface better than previous methods. By fitting SKEL to SMPL meshes we are able to "upgrade" existing human pose and shape datasets to include biomechanical parameters. SKEL provides a new tool to enable biomechanics in the wild, while also providing vision and graphics researchers with a better-constrained human body model.

Project Page Paper DOI [BibTex]


Towards Semi-Automated Pleural Cavity Access for Pneumothorax in Austere Environments

L’Orsa, R., Lama, S., Westwick, D., Sutherland, G., Kuchenbecker, K. J.

Acta Astronautica, 212, pages: 48-53, November 2023 (article)

Abstract
Astronauts are at risk for pneumothorax, a condition where injury or disease introduces air between the chest wall and the lungs (i.e., the pleural cavity). In a worst-case scenario, it can rapidly lead to a fatality if left unmanaged and will require prompt treatment in situ if developed during spaceflight. Chest tube insertion is the definitive treatment for pneumothorax, but it requires a high level of skill and frequent practice for safe use. Physician astronauts may struggle to maintain this skill on medium- and long-duration exploration-class missions, and it is inappropriate for pure just-in-time learning or skill refreshment paradigms. This paper proposes semi-automating tool insertion to reduce the risk of complications in austere environments and describes preliminary experiments providing initial validation of an intelligent prototype system. Specifically, we showcase and analyse motion and force recordings from a sensorized percutaneous access needle inserted repeatedly into an ex vivo tissue phantom, along with relevant physiological data simultaneously recorded from the operator. When coupled with minimal just-in-time training and/or augmented reality guidance, the proposed system may enable non-expert operators to safely perform emergency chest tube insertion without the use of ground resources.

DOI Project Page [BibTex]


Electrochemically Controlled Hydrogels with Electrotunable Permeability and Uniaxial Actuation

Benselfelt, T., Shakya, J., Rothemund, P., Lindström, S. B., Piper, A., Winkler, T. E., Hajian, A., Wågberg, L., Keplinger, C., Hamedi, M. M.

Advanced Materials, 35(45):2303255, Wiley-VCH GmbH, November 2023 (article)

Abstract
The unique properties of hydrogels enable the design of life-like soft intelligent systems. However, stimuli-responsive hydrogels still suffer from limited actuation control. Direct electronic control of electronically conductive hydrogels can solve this challenge and allow direct integration with modern electronic systems. An electrochemically controlled nanowire composite hydrogel with high in-plane conductivity that stimulates a uniaxial electrochemical osmotic expansion is demonstrated. This materials system allows precisely controlled shape-morphing at only −1 V, where capacitive charging of the hydrogel bulk leads to a large uniaxial expansion of up to 300%, caused by the ingress of ≈700 water molecules per electron–ion pair. The material retains its state when turned off, which is ideal for electrotunable membranes as the inherent coupling between the expansion and mesoporosity enables electronic control of permeability for adaptive separation, fractionation, and distribution. Used as electrochemical osmotic hydrogel actuators, they achieve an electroactive pressure of up to 0.7 MPa (1.4 MPa vs dry) and a work density of ≈150 kJ m−3 (2 MJ m−3 vs dry). This new materials system paves the way to integrate actuation, sensing, and controlled permeation into advanced soft intelligent systems.

link (url) DOI [BibTex]


A taxonomy and review of generalization research in NLP

Hupkes, D., Giulianelli, M., Dankers, V., Artetxe, M., Elazar, Y., Pimentel, T., Christodoulopoulos, C., Lasri, K., Saphra, N., Sinclair, A., Ulmer, D., Schottmann, F., Batsuren, K., Sun, K., Sinha, K., Khalatbari, L., Ryskina, M., Frieske, R., Cotterell, R., Jin, Z.

Nature Machine Intelligence, 5(10):1161-1174, October 2023 (article)

DOI [BibTex]


Multimodal Multi-User Surface Recognition with the Kernel Two-Sample Test

Khojasteh, B., Solowjow, F., Trimpe, S., Kuchenbecker, K. J.

IEEE Transactions on Automation Science and Engineering, pages: 1-16, August 2023 (article)

Abstract
Machine learning and deep learning have been used extensively to classify physical surfaces through images and time-series contact data. However, these methods rely on human expertise and entail the time-consuming processes of data and parameter tuning. To overcome these challenges, we propose an easily implemented framework that can directly handle heterogeneous data sources for classification tasks. Our data-versus-data approach automatically quantifies distinctive differences in distributions in a high-dimensional space via kernel two-sample testing between two sets extracted from multimodal data (e.g., images, sounds, haptic signals). We demonstrate the effectiveness of our technique by benchmarking against expertly engineered classifiers for visual-audio-haptic surface recognition due to the industrial relevance, difficulty, and competitive baselines of this application; ablation studies confirm the utility of key components of our pipeline. As shown in our open-source code, we achieve 97.2% accuracy on a standard multi-user dataset with 108 surface classes, outperforming the state-of-the-art machine-learning algorithm by 6% on a more difficult version of the task. The fact that our classifier obtains this performance with minimal data processing in the standard algorithm setting reinforces the powerful nature of kernel methods for learning to recognize complex patterns. Note to Practitioners—We demonstrate how to apply the kernel two-sample test to a surface-recognition task, discuss opportunities for improvement, and explain how to use this framework for other classification problems with similar properties. Automating surface recognition could benefit both surface inspection and robot manipulation. Our algorithm quantifies class similarity and therefore outputs an ordered list of similar surfaces. This technique is well suited for quality assurance and documentation of newly received materials or newly manufactured parts. 
More generally, our automated classification pipeline can handle heterogeneous data sources including images and high-frequency time-series measurements of vibrations, forces and other physical signals. As our approach circumvents the time-consuming process of feature engineering, both experts and non-experts can use it to achieve high-accuracy classification. It is particularly appealing for new problems without existing models and heuristics. In addition to strong theoretical properties, the algorithm is straightforward to use in practice since it requires only kernel evaluations. Its transparent architecture can provide fast insights into the given use case under different sensing combinations without costly optimization. Practitioners can also use our procedure to obtain the minimum data-acquisition time for independent time-series data from new sensor recordings.
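The data-versus-data comparison at the heart of this pipeline can be sketched with an unbiased squared-MMD estimate under a Gaussian kernel, ranking known classes by similarity to a query sample set. The names and the fixed kernel bandwidth here are illustrative assumptions; the paper's full pipeline handles multimodal inputs and is not reproduced by this sketch.

```python
import numpy as np

def mmd2(X, Y, bw=1.0):
    """Unbiased estimate of squared MMD between sample sets X (m,d) and Y (n,d)."""
    def k(A, B):
        # Gaussian kernel matrix: k(a, b) = exp(-||a - b||^2 / (2 bw^2))
        d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
        return np.exp(-d2 / (2 * bw**2))
    m, n = len(X), len(Y)
    Kxx, Kyy, Kxy = k(X, X), k(Y, Y), k(X, Y)
    # Diagonal terms are excluded for the unbiased within-sample averages.
    return ((Kxx.sum() - np.trace(Kxx)) / (m * (m - 1))
            + (Kyy.sum() - np.trace(Kyy)) / (n * (n - 1))
            - 2 * Kxy.mean())

def rank_classes(query, library, bw=1.0):
    """Order class labels from most to least similar to the query sample set."""
    return sorted(library, key=lambda label: mmd2(query, library[label], bw))
```

Because the scores are comparable across classes, the first element of the ranking is the predicted class and the full ordering gives the similarity list mentioned in the Note to Practitioners.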

DOI Project Page [BibTex]


Getting personal with epigenetics: towards individual-specific epigenomic imputation with machine learning

Hawkins-Hooker, A., Visonà, G., Narendra, T., Rojas-Carulla, M., Schölkopf, B., Schweikert, G.

Nature Communications, 14(1), August 2023 (article)

DOI [BibTex]


A model for efficient dynamical ranking in networks

Della Vecchia, A., Neocosmos, K., Larremore, D. B., Moore, C., De Bacco, C.

August 2023 (article) Submitted

Preprint Code link (url) [BibTex]


Magnetically assisted soft milli-tools for occluded lumen morphology detection

Yan, Y., Wang, T., Zhang, R., Liu, Y., Hu, W., Sitti, M.

Science Advances, 9(33):eadi3979, August 2023 (article)


DOI [BibTex]


Minsight: A Fingertip-Sized Vision-Based Tactile Sensor for Robotic Manipulation

Andrussow, I., Sun, H., Kuchenbecker, K. J., Martius, G.

Advanced Intelligent Systems, 5(8):2300042, August 2023, Inside back cover (article)

Abstract
Intelligent interaction with the physical world requires perceptual abilities beyond vision and hearing; vibrant tactile sensing is essential for autonomous robots to dexterously manipulate unfamiliar objects or safely contact humans. Therefore, robotic manipulators need high-resolution touch sensors that are compact, robust, inexpensive, and efficient. The soft vision-based haptic sensor presented herein is a miniaturized and optimized version of the previously published sensor Insight. Minsight has the size and shape of a human fingertip and uses machine learning methods to output high-resolution maps of 3D contact force vectors at 60 Hz. Experiments confirm its excellent sensing performance, with a mean absolute force error of 0.07 N and contact location error of 0.6 mm across its surface area. Minsight's utility is shown in two robotic tasks on a 3-DoF manipulator. First, closed-loop force control enables the robot to track the movements of a human finger based only on tactile data. Second, the informative value of the sensor output is shown by detecting whether a hard lump is embedded within a soft elastomer with an accuracy of 98%. These findings indicate that Minsight can give robots the detailed fingertip touch sensing needed for dexterous manipulation and physical human–robot interaction.


DOI Project Page [BibTex]


Learning to Estimate Palpation Forces in Robotic Surgery From Visual-Inertial Data

Lee, Y., Husin, H. M., Forte, M., Lee, S., Kuchenbecker, K. J.

IEEE Transactions on Medical Robotics and Bionics, 5(3):496-506, August 2023 (article)

Abstract
Surgeons cannot directly touch the patient's tissue in robot-assisted minimally invasive procedures. Instead, they must palpate using instruments inserted into the body through trocars. This way of operating largely prevents surgeons from using haptic cues to localize visually undetectable structures such as tumors and blood vessels, motivating research on direct and indirect force sensing. We propose an indirect force-sensing method that combines monocular images of the operating field with measurements from IMUs attached externally to the instrument shafts. Our method is thus suitable for various robotic surgery systems as well as laparoscopic surgery. We collected a new dataset using a da Vinci Si robot, a force sensor, and four different phantom tissue samples. The dataset includes 230 one-minute-long recordings of repeated bimanual palpation tasks performed by four lay operators. We evaluated several network architectures and investigated the role of the network inputs. Using the DenseNet vision model and including inertial data best-predicted palpation forces (lowest average root-mean-square error and highest average coefficient of determination). Ablation studies revealed that video frames carry significantly more information than inertial signals. Finally, we demonstrated the model's ability to generalize to unseen tissue and predict shear contact forces.

DOI [BibTex]


BARC: Breed-Augmented Regression Using Classification for 3D Dog Reconstruction from Images

Rueegg, N., Zuffi, S., Schindler, K., Black, M. J.

International Journal of Computer Vision (IJCV), 131(8):1964-1979, August 2023 (article)

Abstract
The goal of this work is to reconstruct 3D dogs from monocular images. We take a model-based approach, where we estimate the shape and pose parameters of a 3D articulated shape model for dogs. We consider dogs as they constitute a challenging problem, given they are highly articulated and come in a variety of shapes and appearances. Recent work has considered a similar task using the multi-animal SMAL model, with additional limb scale parameters, obtaining reconstructions that are limited in terms of realism. Like previous work, we observe that the original SMAL model is not expressive enough to represent dogs of many different breeds. Moreover, we make the hypothesis that the supervision signal used to train the network, that is 2D keypoints and silhouettes, is not sufficient to learn a regressor that can distinguish between the large variety of dog breeds. We therefore go beyond previous work in two important ways. First, we modify the SMAL shape space to be more appropriate for representing dog shape. Second, we formulate novel losses that exploit information about dog breeds. In particular, we exploit the fact that dogs of the same breed have similar body shapes. We formulate a novel breed similarity loss, consisting of two parts: One term is a triplet loss, that encourages the shape of dogs from the same breed to be more similar than dogs of different breeds. The second one is a breed classification loss. With our approach we obtain 3D dogs that, compared to previous work, are quantitatively better in terms of 2D reconstruction, and significantly better according to subjective and quantitative 3D evaluations. Our work shows that a-priori side information about similarity of shape and appearance, as provided by breed labels, can help to compensate for the lack of 3D training data. This concept may be applicable to other animal species or groups of species. We call our method BARC (Breed-Augmented Regression using Classification). 
Our code is publicly available for research purposes at https://barc.is.tue.mpg.de/.
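The breed similarity term described above can be illustrated with a standard hinge-style triplet loss on shape vectors: dogs of the same breed (anchor, positive) should be closer in shape space than dogs of different breeds (anchor, negative). The margin value and the Euclidean distance are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def breed_triplet_loss(anchor, positive, negative, margin=0.5):
    """Hinge triplet loss on shape vectors: push d(a,p) + margin below d(a,n).

    anchor/positive come from dogs of the same breed; negative from a
    different breed. Returns 0.0 once the margin constraint is satisfied.
    """
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)
```

In training, this term would be combined with the breed classification loss and the usual 2D keypoint and silhouette supervision described in the abstract.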

On-line pdf DOI [BibTex]


Catastrophic overfitting can be induced with discriminative non-robust features

Ortiz-Jimenez*, G., de Jorge*, P., Sanyal, A., Bibi, A., Dokania, P. K., Frossard, P., Rogez, G., Torr, P.

Transactions on Machine Learning Research, July 2023, *equal contribution (article)

PDF Code link (url) [BibTex]


A Multifunctional Soft Robotic Shape Display with High-speed Actuation, Sensing, and Control

Johnson, B. K., Naris, M., Sundaram, V., Volchko, A., Ly, K., Mitchell, S. K., Acome, E., Kellaris, N., Keplinger, C., Correll, N., Humbert, J. S., Rentschler, M. E.

Nature Communications, 14(1), July 2023 (article)

Abstract
Shape displays which actively manipulate surface geometry are an expanding robotics domain with applications to haptics, manufacturing, aerodynamics, and more. However, existing displays often lack high-fidelity shape morphing, high-speed deformation, and embedded state sensing, limiting their potential uses. Here, we demonstrate a multifunctional soft shape display driven by a 10 × 10 array of scalable cellular units which combine high-speed electrohydraulic soft actuation, magnetic-based sensing, and control circuitry. We report high-performance reversible shape morphing up to 50 Hz, sensing of surface deformations with 0.1 mm sensitivity and external forces with 50 mN sensitivity in each cell, which we demonstrate across a multitude of applications including user interaction, image display, sensing of object mass, and dynamic manipulation of solids and liquids. This work showcases the rich multifunctionality and high-performance capabilities that arise from tightly-integrating large numbers of electrohydraulic actuators, soft sensors, and controllers at a previously undemonstrated scale in soft robotics.

YouTube video link (url) DOI [BibTex]


Liquid Metal Actuators: A Comparative Analysis of Surface Tension Controlled Actuation

Liao, J., Majidi, C., Sitti, M.

Advanced Materials, pages: e2300560, June 2023 (article)

DOI [BibTex]


Community Detection in Large Hypergraphs

Ruggeri, N., Contisciani, M., Battiston, F., De Bacco, C.

Science Advances, 9:eadg9159, June 2023 (article)


Preprint Code Published version DOI [BibTex]


Programmable self-organization of heterogeneous microrobot collectives

Ceron, S., Gardi, G., Petersen, K., Sitti, M.

Proceedings of the National Academy of Sciences, 120(24):e2221913120, June 2023 (article)

DOI [BibTex]


Quantification of intratumoural heterogeneity in mice and patients via machine-learning models trained on PET–MRI data

Katiyar, P., Schwenck, J., Frauenfeld, L., Divine, M. R., Agrawal, V., Kohlhofer, U., Gatidis, S., Kontermann, R., Königsrainer, A., Quintanilla-Martinez, L., la Fougère, C., Schölkopf, B., Pichler, B. J., Disselhorst, J. A.

Nature Biomedical Engineering, 7(8):1014-1027, June 2023 (article)

ei

DOI [BibTex]



Generating Clear Vibrotactile Cues with a Magnet Embedded in a Soft Finger Sheath
Generating Clear Vibrotactile Cues with a Magnet Embedded in a Soft Finger Sheath

Gertler, I., Serhat, G., Kuchenbecker, K. J.

Soft Robotics, 10(3):624-635, June 2023 (article)

Abstract
Haptic displays act on the user's body to stimulate the sense of touch and enrich applications from gaming and computer-aided design to rehabilitation and remote surgery. However, when crafted from typical rigid robotic components, they tend to be heavy, bulky, and expensive, while sleeker designs often struggle to create clear haptic cues. This article introduces a lightweight wearable silicone finger sheath that can deliver salient and rich vibrotactile cues using electromagnetic actuation. We fabricate the sheath on a ferromagnetic mandrel with a process based on dip molding, a robust fabrication method that is rarely used in soft robotics but is suitable for commercial production. A miniature rare-earth magnet embedded within the silicone layers at the center of the finger pad is driven to vibrate by the application of alternating current to a nearby air-coil. Experiments are conducted to determine the amplitude of the magnetic force and the frequency response function for the displacement amplitude of the magnet perpendicular to the skin. In addition, high-fidelity finite element analyses of the finger wearing the device are performed to investigate the trends observed in the measurements. The experimental and simulated results show consistent dynamic behavior from 10 to 1000 Hz, with the displacement decreasing after about 300 Hz. These results match the detection threshold profile obtained in a psychophysical study performed by 17 users, where more current was needed only at the highest frequency. A cue identification experiment and a demonstration in virtual reality validate the feasibility of this approach to fingertip haptics.
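The reported roll-off in displacement above about 300 Hz is the behavior of a simple second-order system near resonance. As a rough illustration only (not the paper's finite element model, and with entirely hypothetical parameter values chosen to place resonance near 300 Hz), a driven mass-spring-damper reproduces the trend of a roughly flat response at low frequencies and falling displacement above resonance:

```python
import math

def displacement_amplitude(freq_hz, force_n=1e-3, mass_kg=1e-4,
                           stiffness_n_per_m=355.0, damping_ratio=0.5):
    """Steady-state displacement amplitude |X| of a driven mass-spring-damper:
    |X| = F0 / sqrt((k - m*w^2)^2 + (c*w)^2), with c from the damping ratio."""
    w = 2.0 * math.pi * freq_hz
    c = 2.0 * damping_ratio * math.sqrt(stiffness_n_per_m * mass_kg)
    return force_n / math.sqrt((stiffness_n_per_m - mass_kg * w ** 2) ** 2
                               + (c * w) ** 2)

# Below resonance (~300 Hz for these assumed values) the response is
# stiffness-limited and nearly flat; above it, inertia dominates and the
# displacement amplitude falls off, as observed in the measurements.
low = displacement_amplitude(100.0)
high = displacement_amplitude(1000.0)
```

With these assumed values, sqrt(k/m)/(2π) ≈ 300 Hz, so the sketch only illustrates why a detection-threshold increase would appear at the highest frequencies.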

hi

DOI Project Page [BibTex]


Virtual Reality Exposure to a Healthy Weight Body Is a Promising Adjunct Treatment for Anorexia Nervosa
Virtual Reality Exposure to a Healthy Weight Body Is a Promising Adjunct Treatment for Anorexia Nervosa

Behrens, S. C., Tesch, J., Sun, P. J., Starke, S., Black, M. J., Schneider, H., Pruccoli, J., Zipfel, S., Giel, K. E.

Psychotherapy and Psychosomatics, 92(3):170-179, June 2023 (article)

Abstract
Introduction/Objective: Treatment results of anorexia nervosa (AN) are modest, with fear of weight gain being a strong predictor of treatment outcome and relapse. Here, we present a virtual reality (VR) setup for exposure to healthy weight and evaluate its potential as an adjunct treatment for AN. Methods: In two studies, we investigate VR experience and clinical effects of VR exposure to higher weight in 20 women with high weight concern or shape concern and in 20 women with AN. Results: In study 1, 90% of participants (18/20) reported symptoms of high arousal but verbalized low to medium levels of fear. Study 2 demonstrated that VR exposure to healthy weight induced high arousal in patients with AN and yielded a trend that four sessions of exposure improved fear of weight gain. Explorative analyses revealed three clusters of individual reactions to exposure, which need further exploration. Conclusions: VR exposure is a well-accepted and powerful tool for evoking fear of weight gain in patients with AN. We observed a statistical trend that repeated virtual exposure to healthy weight improved fear of weight gain with large effect sizes. Further studies are needed to determine the mechanisms and differential effects.

ps

on-line DOI [BibTex]



In the Arms of a Robot: Designing Autonomous Hugging Robots with Intra-Hug Gestures
In the Arms of a Robot: Designing Autonomous Hugging Robots with Intra-Hug Gestures

Block, A. E., Seifi, H., Hilliges, O., Gassert, R., Kuchenbecker, K. J.

ACM Transactions on Human-Robot Interaction, 12(2):1-49, June 2023, Special Issue on Designing the Robot Body: Critical Perspectives on Affective Embodied Interaction (article)

Abstract
Hugs are complex affective interactions that often include gestures like squeezes. We present six new guidelines for designing interactive hugging robots, which we validate through two studies with our custom robot. To achieve autonomy, we investigated robot responses to four human intra-hug gestures: holding, rubbing, patting, and squeezing. Thirty-two users each exchanged and rated sixteen hugs with an experimenter-controlled HuggieBot 2.0. The microphone and pressure sensor in the robot's inflated torso collected data from the subjects' demonstrations, which we used to develop a perceptual algorithm that classifies user actions with 88% accuracy. Users enjoyed robot squeezes regardless of their performed action; they valued variety in the robot's responses and appreciated robot-initiated intra-hug gestures. From average user ratings, we created a probabilistic behavior algorithm that chooses robot responses in real time. We implemented improvements to the robot platform to create HuggieBot 3.0 and then validated its gesture perception system and behavior algorithm with sixteen users. The robot's responses and proactive gestures were greatly enjoyed. Users found the robot more natural, enjoyable, and intelligent in the last phase of the experiment than in the first. After the study, they felt more understood by the robot and thought robots were nicer to hug.
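The behavior algorithm described above chooses robot responses probabilistically from average user ratings. A minimal sketch of that idea, with entirely hypothetical gesture names and rating values (the paper's actual gestures, ratings, and selection rule may differ):

```python
import random

# Hypothetical mean user ratings (1-10) of each robot response to each
# classified intra-hug gesture; placeholder values for illustration only.
RATINGS = {
    "squeeze": {"hold": 6.0, "rub": 7.5, "pat": 5.5, "squeeze": 8.5},
    "pat":     {"hold": 5.0, "rub": 6.5, "pat": 7.0, "squeeze": 8.0},
}

def choose_response(gesture, rng=random):
    """Sample a robot response with probability proportional to its
    average user rating for the detected human gesture, so that highly
    rated responses dominate while lower-rated ones still add variety."""
    options = RATINGS[gesture]
    responses = list(options)
    weights = [options[r] for r in responses]
    return rng.choices(responses, weights=weights, k=1)[0]
```

Rating-proportional sampling is one simple way to balance preferred responses against the variety users said they valued.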

hi

DOI Project Page [BibTex]



The mismatch between experimental and computational fluid dynamics analyses for magnetic surface microrollers
The mismatch between experimental and computational fluid dynamics analyses for magnetic surface microrollers

Bozuyuk, U., Ozturk, H., Sitti, M.

Scientific Reports, 13(1):10196, June 2023 (article)

pi

DOI [BibTex]


Hypergraphx: a library for higher-order network analysis

Lotito, Q. F., Contisciani, M., De Bacco, C., Di Gaetano, L., Gallo, L., Montresor, A., Musciotto, F., Ruggeri, N., Battiston, F.

Journal of Complex Networks, 11, May 2023 (article)

pio

Preprint Code DOI [BibTex]



Better Together: Data Harmonization and Cross-Study Analysis of Abdominal MRI Data From UK Biobank and the German National Cohort

Gatidis, S., Kart, T., Fischer, M., Winzeck, S., Glocker, B., Bai, W., Bülow, R., Emmel, C., Friedrich, L., Kauczor, H., Keil, T., Kröncke, T., Mayer, P., Niendorf, T., Peters, A., Pischon, T., Schaarschmidt, B., Schmidt, B., Schulze, M., Umutlu, L., Völzke, H., Küstner, T., Bamberg, F., Schölkopf, B., Rueckert, D.

Investigative Radiology, 58(5):346-354, May 2023 (article)

ei

DOI [BibTex]



ResMiCo: Increasing the quality of metagenome-assembled genomes with deep learning

Mineeva*, O., Danciu*, D., Schölkopf, B., Ley, R. E., Rätsch, G., Youngblut, N. D.

PLOS Computational Biology, 19(5), Public Library of Science, May 2023, *equal contribution (article)

ei

DOI [BibTex]



Virtual pivot point in human walking: always experimentally observed but simulations suggest it may not be necessary for stability
Virtual pivot point in human walking: always experimentally observed but simulations suggest it may not be necessary for stability

Schreff, L., Haeufle, D. F. B., Badri-Spröwitz, A., Vielemeyer, J., Müller, R.

Journal of Biomechanics, 153, May 2023 (article)

Abstract
The intersection of ground reaction forces near a point above the center of mass has been observed in computer simulation models and human walking experiments. Observed so ubiquitously, the intersection point (IP) is commonly assumed to provide postural stability for bipedal walking. In this study, we challenge this assumption by questioning whether walking without an IP is possible. Deriving gaits with a neuromuscular reflex model through multi-stage optimization, we found stable walking patterns that show no signs of the IP-typical intersection of ground reaction forces. The non-IP gaits we found are stable and successfully reject step-down perturbations, which indicates that an IP is not necessary for locomotion robustness or postural stability. A collision-based analysis shows that non-IP gaits feature center of mass (CoM) dynamics in which the CoM velocity and ground reaction force vectors increasingly oppose each other, indicating an increased mechanical cost of transport. Although our computer simulation results have yet to be confirmed through experimental studies, they already indicate that the role of the IP in postural stability should be further investigated. Moreover, our observations on the CoM dynamics and gait efficiency suggest that the IP may have an alternative or additional function that should be considered.
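An intersection point can be estimated from force-plate data as the point minimizing the summed squared distance to the ground-reaction-force lines of action. A minimal 2D least-squares sketch of that geometric idea (the study's actual estimation procedure may differ); each line passes through a center of pressure along the force direction:

```python
import math

def intersection_point(cops, forces):
    """Least-squares intersection of 2D ground-reaction-force lines.
    Minimizes the summed squared point-to-line distance by solving the
    normal equations (sum_i (I - d_i d_i^T)) x = sum_i (I - d_i d_i^T) p_i,
    where p_i is a center of pressure and d_i a unit force direction."""
    a11 = a12 = a22 = b1 = b2 = 0.0
    for (px, py), (fx, fy) in zip(cops, forces):
        n = math.hypot(fx, fy)
        dx, dy = fx / n, fy / n
        # Projection matrix I - d d^T onto this line's normal direction
        m11, m12, m22 = 1.0 - dx * dx, -dx * dy, 1.0 - dy * dy
        a11 += m11; a12 += m12; a22 += m22
        b1 += m11 * px + m12 * py
        b2 += m12 * px + m22 * py
    det = a11 * a22 - a12 * a12
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)
```

For force lines that genuinely share a common point, the estimator recovers it exactly; for a non-IP gait, the residual of this fit stays large instead of collapsing toward zero.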

dlg

arXiv link (url) DOI [BibTex]



Fast-SNARF: A Fast Deformer for Articulated Neural Fields
Fast-SNARF: A Fast Deformer for Articulated Neural Fields

Chen, X., Jiang, T., Song, J., Rietmann, M., Geiger, A., Black, M. J., Hilliges, O.

IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), pages: 1-15, April 2023 (article)

Abstract
Neural fields have revolutionized the area of 3D reconstruction and novel view synthesis of rigid scenes. A key challenge in making such methods applicable to articulated objects, such as the human body, is to model the deformation of 3D locations between the rest pose (a canonical space) and the deformed space. We propose a new articulation module for neural fields, Fast-SNARF, which finds accurate correspondences between canonical space and posed space via iterative root finding. Fast-SNARF is a drop-in replacement in functionality to our previous work, SNARF, while significantly improving its computational efficiency. We contribute several algorithmic and implementation improvements over SNARF, yielding a speed-up of 150×. These improvements include voxel-based correspondence search, pre-computing the linear blend skinning function, and an efficient software implementation with CUDA kernels. Fast-SNARF enables efficient and simultaneous optimization of shape and skinning weights given deformed observations without correspondences (e.g. 3D meshes). Because learning of deformation maps is a crucial component in many 3D human avatar methods and since Fast-SNARF provides a computationally efficient solution, we believe that this work represents a significant step towards the practical creation of 3D virtual humans.
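The iterative root finding at the heart of this correspondence search can be illustrated in one dimension: given a forward deformation, Newton's method recovers the canonical point that maps to an observed deformed point. A toy sketch under a made-up warp function (the actual method inverts a learned 3D linear-blend-skinning deformation, not this toy):

```python
import math

def warp(x):
    """Toy 1D forward deformation (canonical -> deformed); a stand-in
    for the learned skinning warp that the method inverts numerically."""
    return x + 0.3 * math.sin(x)

def find_canonical(x_deformed, iters=20):
    """Newton iteration solving warp(x) - x_deformed = 0, mirroring the
    iterative root finding used to map deformed points back to the
    canonical space."""
    x = x_deformed  # the deformed position is a reasonable initial guess
    for _ in range(iters):
        residual = warp(x) - x_deformed
        jacobian = 1.0 + 0.3 * math.cos(x)  # always > 0, so Newton is safe
        x -= residual / jacobian
    return x
```

In the real setting the Jacobian is a 3×3 matrix of the skinning warp, and the paper's voxel-based search and precomputation are what make evaluating it cheap.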

ps

pdf publisher site code DOI [BibTex]



Uncovering the Organization of Neural Circuits with Generalized Phase Locking Analysis

Safavi, S., Panagiotaropoulos, T. I., Kapoor, V., Ramirez-Villegas, J. F., Logothetis, N., Besserve, M.

PLOS Computational Biology, 19(4):1-45, Public Library of Science, April 2023 (article)

ei

bioRxiv DOI Project Page [BibTex]



The ENCODE Imputation Challenge: a critical assessment of methods for cross-cell type imputation of epigenomic profiles

Schreiber*, J., Boix*, C., Lee, J. W., Li, H., Guan, Y., Chang, C., Chang, J., Hawkins-Hooker, A., Schölkopf, B., Schweikert, G., Carulla, M. R., Canakoglu, A., Guzzo, F., Nanni, L., Masseroli, M., Carman, M. J., Pinoli, P., Hong, C., Yip, K. Y., Spence, J. P., Batra, S. S., Song, Y. S., Mahony, S., Zhang, Z., Tan, W., Shen, Y., Sun, Y., Shi, M., Adrian, J., Sandstrom, R., Farrell, N., Halow, J., Lee, K., Jiang, L., Yang, X., Epstein, C., Strattan, J. S., Bernstein, B., Snyder, M., Kellis, M., Stafford, W., Kundaje, A., ENCODE Imputation Challenge Participants,

Genome Biology, 24, April 2023, *co-first authors (article)

ei

DOI [BibTex]



Adapting to noise distribution shifts in flow-based gravitational-wave inference

Wildberger, J., Dax, M., Green, S. R., Gair, J., Pürrer, M., Macke, J. H., Buonanno, A., Schölkopf, B.

Physical Review D, 107(8), April 2023 (article)

ei

DOI [BibTex]
