

2017


Common Envelope Light Curves. I. Grid-code Module Calibration

Galaviz, P., De Marco, O., Passy, J., Staff, J., Iaconi, R.

Astrophysical Journal, Supplement, 229, pages: 36, 2017 (article)

DOI [BibTex]

Behind Distribution Shift: Mining Driving Forces of Changes and Causal Arrows

Huang, B., Zhang, K., Zhang, J., Sanchez-Romero, R., Glymour, C., Schölkopf, B.

IEEE 17th International Conference on Data Mining (ICDM 2017), pages: 913-918, (Editors: Vijay Raghavan, Srinivas Aluru, George Karypis, Lucio Miele and Xindong Wu), 2017 (conference)

ei

DOI [BibTex]

Multi-fractal characterization of bacterial swimming dynamics: a case study on real and simulated Serratia marcescens

Koorehdavoudi, H., Bogdan, P., Wei, G., Marculescu, R., Zhuang, J., Carlsen, R. W., Sitti, M.

Proceedings of the Royal Society of London A: Mathematical, Physical and Engineering Sciences, 473(2203), The Royal Society, 2017 (article)

Abstract
To add to the current state of knowledge about bacterial swimming dynamics, in this paper, we study the fractal swimming dynamics of populations of Serratia marcescens bacteria both in vitro and in silico, while accounting for realistic conditions like volume exclusion, chemical interactions, obstacles and distribution of chemoattractant in the environment. While previous research has shown that bacterial motion is non-ergodic, we demonstrate that, besides the non-ergodicity, the bacterial swimming dynamics is multi-fractal in nature. Finally, we demonstrate that the multi-fractal characteristic of bacterial dynamics is strongly affected by bacterial density and chemoattractant concentration.
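
The multi-fractal claim above can be probed with a standard structure-function diagnostic. The sketch below is a generic illustration of that idea (toy trajectory, assumed lag and moment ranges), not the authors' analysis pipeline: a monofractal signal gives scaling exponents that grow linearly with the moment order q, while multi-fractal dynamics bend that curve.

```python
# Generic multifractal diagnostic (illustrative sketch, not the paper's pipeline):
# estimate structure-function exponents zeta(q); nonlinearity of zeta(q) in q
# indicates multi-fractal rather than monofractal scaling.
import numpy as np

rng = np.random.default_rng(1)
x = np.cumsum(rng.standard_normal(20000))        # toy 1D trajectory (Brownian-like)

qs = [1.0, 2.0, 3.0, 4.0]
lags = np.unique(np.logspace(0.5, 3, 20).astype(int))

zeta = []
for q in qs:
    Sq = [np.mean(np.abs(x[lag:] - x[:-lag]) ** q) for lag in lags]
    zeta.append(np.polyfit(np.log(lags), np.log(Sq), 1)[0])   # S_q(tau) ~ tau^zeta(q)

print(dict(zip(qs, np.round(zeta, 2))))   # ~0.5*q for monofractal Brownian motion
```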

pi

link (url) DOI [BibTex]

Automatic detection of motion artifacts in MR images using CNNs

Meding, K., Loktyushin, A., Hirsch, M.

IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2017), pages: 811-815, 2017 (conference)

ei

DOI [BibTex]

Learning to Filter Object Detections

Prokudin, S., Kappler, D., Nowozin, S., Gehler, P.

In Pattern Recognition: 39th German Conference, GCPR 2017, Basel, Switzerland, September 12–15, 2017, Proceedings, pages: 52-62, Springer International Publishing, Cham, 2017 (inbook)

Abstract
Most object detection systems consist of three stages. First, a set of individual hypotheses for object locations is generated using a proposal generating algorithm. Second, a classifier scores every generated hypothesis independently to obtain a multi-class prediction. Finally, all scored hypotheses are filtered via a non-differentiable and decoupled non-maximum suppression (NMS) post-processing step. In this paper, we propose a filtering network (FNet), a method which replaces NMS with a differentiable neural network that allows joint reasoning and re-scoring of the generated set of hypotheses per image. This formulation enables end-to-end training of the full object detection pipeline. First, we demonstrate that FNet, a feed-forward network architecture, is able to mimic NMS decisions, despite the sequential nature of NMS. We further analyze NMS failures and propose a loss formulation that is better aligned with the mean average precision (mAP) evaluation metric. We evaluate FNet on several standard detection datasets. Results surpass standard NMS on highly occluded settings of a synthetic overlapping MNIST dataset and show competitive behavior on PascalVOC2007 and KITTI detection benchmarks.
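
For context, the sketch below shows the greedy, hand-crafted NMS step that FNet replaces with a learned, differentiable network. It is a plain-Python toy; the boxes, scores, and IoU threshold are made up for illustration.

```python
# Minimal greedy non-maximum suppression (the baseline step that FNet learns to replace).
# Boxes are (x1, y1, x2, y2); scores are per-box confidences.
def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def greedy_nms(boxes, scores, thresh=0.5):
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:                       # visit hypotheses in decreasing score
        if all(iou(boxes[i], boxes[j]) < thresh for j in keep):
            keep.append(i)                # keep only boxes that do not overlap a kept one
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (20, 20, 30, 30)]
scores = [0.9, 0.8, 0.7]
print(greedy_nms(boxes, scores))   # -> [0, 2]; the overlapping box 1 is suppressed
```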

ps

Paper link (url) DOI [BibTex]

Microemulsion-Based Soft Bacteria-Driven Microswimmers for Active Cargo Delivery

Singh, A. V., Hosseinidoust, Z., Park, B., Yasa, O., Sitti, M.

ACS Nano, 2017, PMID: 28858477 (article)

Abstract
Biohybrid cell-driven microsystems offer unparalleled possibilities for realization of soft microrobots at the micron scale. Here, we introduce a bacteria-driven microswimmer that combines the active locomotion and sensing capabilities of bacteria with the desirable encapsulation and viscoelastic properties of a soft double-micelle microemulsion for active transport and delivery of cargo (e.g., imaging agents, genes, and drugs) to living cells. Quasi-monodisperse double emulsions were synthesized with an aqueous core that encapsulated the fluorescence imaging agents, as a proof-of-concept cargo in this study, and an outer oil shell that was functionalized with streptavidin for specific and stable attachment of biotin-conjugated Escherichia coli. Motile bacteria effectively propelled the soft microswimmers across a Transwell membrane, actively delivering imaging agents (i.e., dyes) encapsulated inside of the micelles to a monolayer of cultured MCF7 breast cancer and J744A.1 macrophage cells, which enabled real-time, live-cell imaging of cell organelles, namely mitochondria, endoplasmic reticulum, and Golgi body. This in vitro model demonstrates the proof-of-concept feasibility of the proposed soft microswimmers and offers promise for potential biomedical applications in active and/or targeted transport and delivery of imaging agents, drugs, stem cells, siRNA, and therapeutic genes to live tissue in in vitro disease models (e.g., organ-on-a-chip devices) and stagnant or low-flow-velocity fluidic regions of the human body.

pi

link (url) DOI Project Page Project Page [BibTex]


Biomechanics and Locomotion Control in Legged Animals and Legged Robots

Sproewitz, A., Heim, S.

2017 (mpi_year_book)

Abstract
An animal's running gait is dynamic, efficient, elegant, and adaptive. We see locomotion in animals as an orchestrated interplay of the locomotion apparatus, interacting with its environment. The Dynamic Locomotion Group at the Max Planck Institute for Intelligent Systems in Stuttgart develops novel legged robots to decipher aspects of biomechanics and neuromuscular control of legged locomotion in animals, and to understand general principles of locomotion.

link (url) DOI [BibTex]


Linking Mechanics and Learning

Heim, S., Grimminger, F., Özge, D., Spröwitz, A.

In Proceedings of Dynamic Walking 2017, 2017 (inproceedings)

dlg

[BibTex]

Mode Evolution in Strongly Coupled Plasmonic Dolmens Fabricated by Templated Assembly

Flauraud, V., Bernasconi, G. D., Butet, J., Mastrangeli, M., Alexander, D. T. L., Martin, O. J. F., Brugger, J.

ACS Photonics, 4(7):1661-1668, 2017 (article)

Abstract
Plasmonic antennas have enabled a wealth of applications that exploit tailored near-fields and radiative properties, further endowed by the bespoke interactions of multiple resonant building blocks. Specifically, when the interparticle distances are reduced to a few nanometers, coupling may be greatly enhanced leading to ultimate near-field intensities and confinement along with a large energy splitting of resonant modes. While this concept is well-known, the fabrication and characterization of suitable multimers with controlled geometries and few-nanometer gaps remains highly challenging. In this article, we present the topographically templated assembly of single-crystal colloidal gold nanorods into trimers, with a dolmen geometry. This fabrication method enables the precise positioning of high-quality nanorods, with gaps as small as 1.5 nm, which permits a gradual and controlled symmetry breaking by tuning the arrangement of these strongly coupled nanostructures. To characterize the fabricated structures, we perform electron energy loss spectroscopy (EELS) near-field hyperspectral imaging and geometrically accurate EELS, plane wave, and eigenmode full-wave computations to reveal the principles governing the electromagnetic response of such nanostructures that have been extensively studied under plane wave excitation for their Fano resonant properties. These experiments track the evolution of the multipolar interactions with high accuracy as the antenna geometry varies. Our results provide new insights into strongly coupled single-crystal building blocks and open new opportunities for the design and fabrication of plasmonic systems.

pi

link (url) DOI [BibTex]

Roughness perception of virtual textures displayed by electrovibration on touch screens

Vardar, Y., Isleyen, A., Saleem, M. K., Basdogan, C.

In 2017 IEEE World Haptics Conference (WHC), pages: 263-268, 2017 (inproceedings)

Abstract
In this study, we have investigated the human roughness perception of periodical textures on an electrostatic display by conducting psychophysical experiments with 10 subjects. To generate virtual textures, we used low-frequency unipolar pulse waves with different waveforms (sinusoidal, square, saw-tooth, triangle) and spacings. We modulated these waves with a 3 kHz high-frequency sinusoidal carrier signal to minimize perceptual differences due to the electrical filtering of the human finger and to eliminate low-frequency distortions. The subjects were asked to rate 40 different macro textures on a Likert scale of 1-7. We also collected the normal and tangential forces acting on the fingers of subjects during the experiment. The results of our user study showed that subjects perceived the square wave as the roughest while they perceived the other waveforms as equally rough. The perceived roughness followed an inverted U-shaped curve as a function of groove width, but the peak point shifted to the left compared to the results of earlier studies. Moreover, we found that the roughness perception of subjects is best correlated with the rate of change of the contact forces rather than the forces themselves.
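
The stimulus construction described above (a low-frequency unipolar pulse wave multiplied by a 3 kHz sinusoidal carrier) can be sketched in a few lines of NumPy/SciPy. The sampling rate, fundamental frequency, and amplitude below are assumptions for illustration only, not values taken from the paper.

```python
# Sketch of an amplitude-modulated electrovibration drive signal (parameters assumed).
import numpy as np
from scipy import signal

fs = 50_000                                   # sample rate in Hz (assumption)
t = np.arange(0.0, 0.1, 1.0 / fs)             # 100 ms of signal

f_low = 25                                    # pulse-wave fundamental in Hz (assumption)
pulse = 0.5 * (signal.square(2 * np.pi * f_low * t) + 1.0)   # unipolar square in [0, 1]
carrier = np.sin(2 * np.pi * 3000.0 * t)                     # 3 kHz sinusoidal carrier
stimulus = pulse * carrier                    # modulated drive voltage (normalized amplitude)
print(stimulus.shape, stimulus.min(), stimulus.max())
```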

hi

DOI [BibTex]

Reproduction of textures based on electrovibration

Fiedler, T., Vardar, Y., Strese, M., Steinbach, E., Basdogan, C.

Demo in IEEE World Haptics, 2017 (misc)

Abstract
This demonstration presents an approach to represent textures based on electrovibration. We collect acceleration data while sliding a tool tip over a real texture surface. The prerecorded data were collected by an ADXL335 accelerometer mounted on a FALCON device moving along the x-axis with a regulated velocity. In replicating the same acceleration with electrovibration, we encountered two problems. First, the frequency of a displayed sine wave shifts to double its frequency. This effect originates from the electrostatic force between the finger pad and the tactile display, as shown by Kaczmarek et al. [1]. Taking the square root of the input signal corrects the effect, as also proposed earlier in [1, 2, 3]. Second, if not one but multiple sine waves are displayed, interference occurs and acceleration signals from real textures may not feel perceptually realistic. We propose to display only the dominant frequencies of a real texture signal. Peak frequencies are determined with respect to the JND of 11 percent reported in the earlier literature. A new sine wave signal with the dominant frequencies is created. In the demo, we will let the attendees feel the differences between prerecorded and artificially created textures.
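
The two corrections described above, square-rooting the drive signal to undo the V² electrostatic force law and keeping only dominant spectral peaks separated by more than the ~11% frequency JND, can be sketched as follows. The recorded signal, sampling rate, and magnitude cutoff are assumptions for illustration, not the demo's actual data.

```python
# Sketch of the two processing steps described above (toy recorded signal).
import numpy as np

fs = 10_000
t = np.arange(0.0, 1.0, 1.0 / fs)
recorded = 0.8 * np.sin(2 * np.pi * 60 * t) + 0.3 * np.sin(2 * np.pi * 180 * t)

# 1) sqrt correction: the electrostatic force grows with V^2, so driving with
#    sqrt(|signal|) keeps the felt force proportional to the recorded signal.
drive = np.sqrt(np.abs(recorded))

# 2) keep only dominant spectral peaks separated by more than the ~11% JND.
spectrum = np.abs(np.fft.rfft(recorded))
freqs = np.fft.rfftfreq(len(recorded), 1.0 / fs)
dominant = []
for i in np.argsort(spectrum)[::-1]:
    if spectrum[i] < 0.1 * spectrum.max():     # assumed magnitude cutoff
        break
    f = freqs[i]
    if f > 0 and all(abs(f - g) / g > 0.11 for g in dominant):
        dominant.append(f)
print(sorted(dominant))   # ~[60.0, 180.0] Hz for this toy signal
```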

hi

[BibTex]

Discriminative k-shot learning using probabilistic models

Bauer*, M., Rojas-Carulla*, M., Świątkowski, J. B., Schölkopf, B., Turner, R. E.

Second Workshop on Bayesian Deep Learning at the 31st Conference on Neural Information Processing Systems (NIPS), 2017, *equal contribution (conference)

ei

link (url) [BibTex]

Unsupervised clustering of EOG as a viable substitute for optical eye-tracking

Flad, N., Fomina, T., Bülthoff, H. H., Chuang, L. L.

In First Workshop on Eye Tracking and Visualization (ETVIS 2015), pages: 151-167, Mathematics and Visualization, (Editors: Burch, M., Chuang, L., Fisher, B., Schmidt, A., and Weiskopf, D.), Springer, 2017 (inbook)

ei

DOI [BibTex]

BundleMAP: Anatomically Localized Classification, Regression, and Hypothesis Testing in Diffusion MRI

Khatami, M., Schmidt-Wilcke, T., Sundgren, P. C., Abbasloo, A., Schölkopf, B., Schultz, T.

Pattern Recognition, 63, pages: 593-600, 2017 (article)

ei

DOI [BibTex]

Data-Driven Physics for Human Soft Tissue Animation

Kim, M., Pons-Moll, G., Pujades, S., Bang, S., Kim, J., Black, M., Lee, S.

ACM Transactions on Graphics, (Proc. SIGGRAPH), 36(4), 2017 (article)

Abstract
Data-driven models of human poses and soft-tissue deformations can produce very realistic results, but they only model the visible surface of the human body and cannot create skin deformation due to interactions with the environment. Physical simulations can generalize to external forces, but their parameters are difficult to control. In this paper, we present a layered volumetric human body model learned from data. Our model is composed of a data-driven inner layer and a physics-based external layer. The inner layer is driven with a volumetric statistical body model (VSMPL). The soft tissue layer consists of a tetrahedral mesh that is driven using the finite element method (FEM). Model parameters, namely the segmentation of the body into layers and the soft tissue elasticity, are learned directly from 4D registrations of humans exhibiting soft tissue deformations. The learned two-layer model is a realistic full-body avatar that generalizes to novel motions and external forces. Experiments show that the resulting avatars produce realistic results on held-out sequences and react to external forces. Moreover, the model supports the retargeting of physical properties from one avatar to another when they share the same topology.

ps

video paper link (url) [BibTex]

Local Group Invariant Representations via Orbit Embeddings

Raj, A., Kumar, A., Mroueh, Y., Fletcher, T., Schölkopf, B.

Proceedings of the 20th International Conference on Artificial Intelligence and Statistics (AISTATS 2017), 54, pages: 1225-1235, Proceedings of Machine Learning Research, (Editors: Aarti Singh and Jerry Zhu), 2017 (conference)

ei

link (url) [BibTex]

Learning Inference Models for Computer Vision

Jampani, V.

MPI for Intelligent Systems and University of Tübingen, 2017 (phdthesis)

Abstract
Computer vision can be understood as the ability to perform 'inference' on image data. Breakthroughs in computer vision technology are often marked by advances in inference techniques, as even the model design is often dictated by the complexity of inference in them. This thesis proposes learning based inference schemes and demonstrates applications in computer vision. We propose techniques for inference in both generative and discriminative computer vision models. Despite their intuitive appeal, the use of generative models in vision is hampered by the difficulty of posterior inference, which is often too complex or too slow to be practical. We propose techniques for improving inference in two widely used techniques: Markov Chain Monte Carlo (MCMC) sampling and message-passing inference. Our inference strategy is to learn separate discriminative models that assist Bayesian inference in a generative model. Experiments on a range of generative vision models show that the proposed techniques accelerate the inference process and/or converge to better solutions. A main complication in the design of discriminative models is the inclusion of prior knowledge in a principled way. For better inference in discriminative models, we propose techniques that modify the original model itself, as inference is simple evaluation of the model. We concentrate on convolutional neural network (CNN) models and propose a generalization of standard spatial convolutions, which are the basic building blocks of CNN architectures, to bilateral convolutions. First, we generalize the existing use of bilateral filters and then propose new neural network architectures with learnable bilateral filters, which we call 'Bilateral Neural Networks'. We show how the bilateral filtering modules can be used for modifying existing CNN architectures for better image segmentation and propose a neural network approach for temporal information propagation in videos. Experiments demonstrate the potential of the proposed bilateral networks on a wide range of vision tasks and datasets. In summary, we propose learning based techniques for better inference in several computer vision models ranging from inverse graphics to freely parameterized neural networks. In generative vision models, our inference techniques alleviate some of the crucial hurdles in Bayesian posterior inference, paving new ways for the use of model based machine learning in vision. In discriminative CNN models, the proposed filter generalizations aid in the design of new neural network architectures that can handle sparse high-dimensional data as well as provide a way for incorporating prior knowledge into CNNs.
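
As a point of reference for the bilateral convolutions discussed above, the sketch below implements the classical fixed (non-learned) bilateral filter on a toy image; the thesis generalizes this operation to learnable filters inside CNNs. It is a brute-force illustration with assumed kernel parameters, not code from the thesis.

```python
# Brute-force bilateral filter: weights combine spatial distance and intensity difference,
# so smoothing is strong within uniform regions but weak across edges.
import numpy as np

def bilateral_filter(img, sigma_s=2.0, sigma_r=0.2, radius=4):
    h, w = img.shape
    out = np.zeros_like(img)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))        # fixed spatial kernel
    padded = np.pad(img, radius, mode="edge")
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            range_w = np.exp(-((patch - img[i, j]) ** 2) / (2 * sigma_r**2))  # range kernel
            weights = spatial * range_w
            out[i, j] = np.sum(weights * patch) / np.sum(weights)
    return out

noisy = np.clip(np.eye(32) + 0.1 * np.random.default_rng(0).standard_normal((32, 32)), 0, 1)
print(bilateral_filter(noisy).shape)   # (32, 32): edge-preserving smoothing of the toy image
```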

ps

pdf [BibTex]

Recent Advances in Skin Penetration Enhancers for Transdermal Gene and Drug Delivery

Amjadi, M., Mostaghaci, B., Sitti, M.

Current Gene Therapy, 17, pages: 000-000, 2017 (article)

Abstract
There is a growing interest in transdermal delivery systems because of their noninvasive, targeted, and on-demand delivery of genes and drugs. However, efficient penetration of therapeutic compounds into the skin is still challenging, largely due to the impermeability of the outermost layer of the skin, known as the stratum corneum. Recently, there have been major research activities to enhance the skin penetration depth of pharmacological agents. This article reviews recent advances in the development of various strategies for skin penetration enhancement. We show that approaches such as ultrasound waves, laser, and microneedle patches have successfully been employed to physically disrupt the stratum corneum structure for enhanced transdermal delivery. Beyond physical approaches, several non-physical routes have also been utilized for efficient transdermal delivery across the skin barrier. Finally, we discuss some clinical applications of transdermal delivery systems for gene and drug delivery. This paper shows that transdermal delivery devices can potentially function for diverse healthcare and medical applications, while further investigations are still necessary for more efficient skin penetration of genes and drugs.

pi

DOI Project Page [BibTex]


A fully dense and globally consistent 3D map reconstruction approach for GI tract to enhance therapeutic relevance of the endoscopic capsule robot

Turan, M., Pilavci, Y. Y., Jamiruddin, R., Araujo, H., Konukoglu, E., Sitti, M.

arXiv preprint arXiv:1705.06524, 2017 (article)

Abstract
In the gastrointestinal (GI) tract endoscopy field, ingestible wireless capsule endoscopy is emerging as a novel, minimally invasive diagnostic technology for inspection of the GI tract and diagnosis of a wide range of diseases and pathologies. Since the development of this technology, medical device companies and many research groups have made substantial progress in converting passive capsule endoscopes to robotic active capsule endoscopes with most of the functionality of current active flexible endoscopes. However, robotic capsule endoscopy still has some challenges. In particular, the use of such devices to generate a precise three-dimensional (3D) mapping of the entire inner organ remains an unsolved problem. Such global 3D maps of inner organs would help doctors to detect the location and size of diseased areas more accurately and intuitively, thus permitting more reliable diagnoses. To our knowledge, this paper presents the first complete pipeline for a complete 3D visual map reconstruction of the stomach. The proposed pipeline is modular and includes a preprocessing module, an image registration module, and a final shape-from-shading-based 3D reconstruction module; the 3D map is primarily generated by a combination of image stitching and shape-from-shading techniques, and is updated in a frame-by-frame iterative fashion via capsule motion inside the stomach. A comprehensive quantitative analysis of the proposed 3D reconstruction method is performed using an esophagus gastro duodenoscopy simulator, three different endoscopic cameras, and a 3D optical scanner.

pi

link (url) Project Page [BibTex]


Mobile Microrobotics

Sitti, M.

Mobile Microrobotics, pages: 304, The MIT Press, Cambridge, MA, 2017 (book)

Abstract
Progress in micro- and nano-scale science and technology has created a demand for new microsystems for high-impact applications in healthcare, biotechnology, manufacturing, and mobile sensor networks. The new robotics field of microrobotics has emerged to extend our interactions and explorations to sub-millimeter scales. This is the first textbook on micron-scale mobile robotics, introducing the fundamentals of design, analysis, fabrication, and control, and drawing on case studies of existing approaches. The book covers the scaling laws that can be used to determine the dominant forces and effects at the micron scale; models forces acting on microrobots, including surface forces, friction, and viscous drag; and describes such possible microfabrication techniques as photo-lithography, bulk micromachining, and deep reactive ion etching. It presents on-board and remote sensing methods, noting that remote sensors are currently more feasible; studies possible on-board microactuators; discusses self-propulsion methods that use self-generated local gradients and fields or biological cells in liquid environments; and describes remote microrobot actuation methods for use in limited spaces such as inside the human body. It covers possible on-board powering methods, indispensable in future medical and other applications; locomotion methods for robots on surfaces, in liquids, in air, and on fluid-air interfaces; and the challenges of microrobot localization and control, in particular multi-robot control methods for magnetic microrobots. Finally, the book addresses current and future applications, including noninvasive medical diagnosis and treatment, environmental remediation, and scientific tools.

pi

Mobile Microrobotics By Metin Sitti - Chapter 1 (PDF) link (url) [BibTex]

New Directions for Learning with Kernels and Gaussian Processes (Dagstuhl Seminar 16481)

Gretton, A., Hennig, P., Rasmussen, C., Schölkopf, B.

Dagstuhl Reports, 6(11):142-167, 2017 (article)

ei pn

DOI [BibTex]

Planning spin-walking locomotion for automatic grasping of microobjects by an untethered magnetic microgripper

Dong, X., Sitti, M.

In 2017 IEEE International Conference on Robotics and Automation (ICRA), pages: 6612-6618, 2017 (inproceedings)

Abstract
Most demonstrated mobile microrobot tasks so far have been achieved via pick-and-placing and dynamic trapping with teleoperation or simple path following algorithms. In our previous work, an untethered magnetic microgripper has been developed which has advanced functions, such as gripping objects. Teleoperated manipulation in both 2D and 3D has been demonstrated. However, it is challenging to control the magnetic microgripper to carry out manipulation tasks, because the grasping of objects so far in the literature relies heavily on teleoperation, which takes several minutes even for a skilled human expert. Here, we propose a new spin-walking locomotion and an automated 2D grasping motion planner for the microgripper, which enables time-efficient automatic grasping of microobjects that has not been achieved yet for untethered microrobots. In its locomotion, the microgripper repeatedly rotates about two principal axes to regulate its pose and move precisely on a surface. The motion planner could plan different motion primitives for grasping and compensate for the uncertainties in the motion by learning the uncertainties and planning accordingly. We experimentally demonstrated that, using the proposed method, the microgripper could align to the target pose with an error of less than 0.1 body length and grip the objects within 40 seconds. Our method could significantly improve the time efficiency of micro-scale manipulation and has potential applications in microassembly and biomedical engineering.

pi

DOI Project Page [BibTex]

Statistical Asymmetries Between Cause and Effect

Janzing, D.

In Time in Physics, pages: 129-139, Tutorials, Schools, and Workshops in the Mathematical Sciences, (Editors: Renner, Renato and Stupar, Sandra), Springer International Publishing, Cham, 2017 (inbook)

ei

link (url) DOI [BibTex]

A parametric texture model based on deep convolutional features closely matches texture appearance for humans

Wallis, T. S. A., Funke, C. M., Ecker, A. S., Gatys, L. A., Wichmann, F. A., Bethge, M.

Journal of Vision, 17(12), 2017 (article)

ei

DOI [BibTex]

The tactile perception of transient changes in friction

Gueorguiev, D., Vezzoli, E., Mouraux, A., Lemaire-Semail, B., Thonnard, J.

Journal of The Royal Society Interface, 14(137), The Royal Society, 2017 (article)

Abstract
When we touch an object or explore a texture, frictional strains are induced by the tactile interactions with the surface of the object. Little is known about how these interactions are perceived, although it becomes crucial for the nascent industry of interactive displays with haptic feedback (e.g. smartphones and tablets) where tactile feedback based on friction modulation is particularly relevant. To investigate the human perception of frictional strains, we mounted a high-fidelity friction modulating ultrasonic device on a robotic platform performing controlled rubbing of the fingertip and asked participants to detect induced decreases of friction during a forced-choice task. The ability to perceive the changes in friction was found to follow Weber's Law of just noticeable differences, as it consistently depended on the ratio between the reduction in tangential force and the pre-stimulation tangential force. The Weber fraction was 0.11 in all conditions demonstrating a very high sensitivity to transient changes in friction. Humid fingers experienced less friction reduction than drier ones for the same intensity of ultrasonic vibration but the Weber fraction for detecting changes in friction was not influenced by the humidity of the skin.
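
A minimal worked example of the reported Weber fraction (0.11): a reduction in tangential friction force is taken to be detectable when its drop, relative to the pre-stimulation force, reaches that ratio. The force values below are illustrative only.

```python
# Weber's Law check for friction-change detection: delta_F / F_pre >= 0.11.
def detectable(f_pre, f_post, weber_fraction=0.11):
    return (f_pre - f_post) / f_pre >= weber_fraction

print(detectable(0.50, 0.46))   # 8% drop  -> False (below the Weber fraction)
print(detectable(0.50, 0.43))   # 14% drop -> True
```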

hi

link (url) DOI [BibTex]

Data Collection for Robust End-to-End Lateral Vehicle Control

Geist, A. R., Hansen, A., Solowjow, E., Yang, S., Kreuzer, E.

In ASME 2017 Dynamic Systems and Control Conference, pages: V001T45A007-V001T45A007, 2017 (inproceedings)

[BibTex]

Effect of Waveform on Tactile Perception by Electrovibration Displayed on Touch Screens

Vardar, Y., Güçlü, B., Basdogan, C.

IEEE Transactions on Haptics, 10(4):488-499, 2017 (article)

Abstract
In this study, we investigated the effect of input voltage waveform on our haptic perception of electrovibration on touch screens. Through psychophysical experiments performed with eight subjects, we first measured the detection thresholds of electrovibration stimuli generated by sinusoidal and square voltages at various fundamental frequencies. We observed that the subjects were more sensitive to stimuli generated by square-wave voltage than by sinusoidal voltage for frequencies lower than 60 Hz. Using Matlab simulations, we showed that the sensation difference between the waveforms at low fundamental frequencies occurred due to the frequency-dependent electrical properties of human skin and human tactile sensitivity. To validate our simulations, we conducted a second experiment with another group of eight subjects. We first actuated the touch screen at the threshold voltages estimated in the first experiment and then measured the contact force and acceleration acting on the index fingers of the subjects moving on the screen with a constant speed. We analyzed the collected data in the frequency domain using the human vibrotactile sensitivity curve. The results suggested that the Pacinian channel was the primary psychophysical channel in the detection of the electrovibration stimuli caused by all the square-wave inputs tested in this study. We also observed that the measured force and acceleration data were affected by finger speed in a complex manner, suggesting that it may also affect our haptic perception accordingly.
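
The waveform dependence reported above is rooted in the electrostatic force scaling with the square of the applied voltage. A standard approximation (not quoted from the paper) makes the effect explicit:

```latex
F_e(t) \propto V(t)^2, \qquad V(t) = A \sin(2\pi f t)
\;\Rightarrow\;
F_e(t) \propto \tfrac{A^2}{2}\bigl(1 - \cos(4\pi f t)\bigr),
```

so a sinusoidal drive is felt at twice its fundamental frequency around a constant offset, and waveforms with the same fundamental but different harmonic content (square vs. sinusoidal) produce different force spectra after squaring, which the skin then filters in a frequency-dependent way.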

hi

DOI [BibTex]

Model Selection for Gaussian Mixture Models

Huang, T., Peng, H., Zhang, K.

Statistica Sinica, 27(1):147-169, 2017 (article)

ei

link (url) [BibTex]

DiSMEC – Distributed Sparse Machines for Extreme Multi-label Classification

Babbar, R., Schölkopf, B.

Proceedings of the Tenth ACM International Conference on Web Search and Data Mining (WSDM 2017), pages: 721-729, 2017 (conference)

ei

DOI [BibTex]

Sparse Inertial Poser: Automatic 3D Human Pose Estimation from Sparse IMUs

(Best Paper, Eurographics 2017)

Marcard, T. V., Rosenhahn, B., Black, M., Pons-Moll, G.

Computer Graphics Forum, 36(2), Proceedings of the 38th Annual Conference of the European Association for Computer Graphics (Eurographics), pages: 349-360, 2017 (article)

Abstract
We address the problem of making human motion capture in the wild more practical by using a small set of inertial sensors attached to the body. Since the problem is heavily under-constrained, previous methods either use a large number of sensors, which is intrusive, or they require additional video input. We take a different approach and constrain the problem by: (i) making use of a realistic statistical body model that includes anthropometric constraints and (ii) using a joint optimization framework to fit the model to orientation and acceleration measurements over multiple frames. The resulting tracker Sparse Inertial Poser (SIP) enables motion capture using only 6 sensors (attached to the wrists, lower legs, back and head) and works for arbitrary human motions. Experiments on the recently released TNT15 dataset show that, using the same number of sensors, SIP achieves higher accuracy than the dataset baseline without using any video data. We further demonstrate the effectiveness of SIP on newly recorded challenging motions in outdoor scenarios such as climbing or jumping over a wall.
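
The joint optimization described above fits body-model orientations and accelerations to IMU measurements over multiple frames. The toy below illustrates only the general shape of such an objective (a single 1-DoF angle, synthetic measurements, assumed weights); it is not the SMPL-based SIP implementation.

```python
# Toy multi-frame fit: minimize orientation and acceleration residuals jointly.
import numpy as np
from scipy.optimize import least_squares

T = 50                                     # number of frames
t = np.linspace(0.0, 1.0, T)
theta_true = 0.5 * np.sin(2 * np.pi * t)   # ground-truth joint angle per frame

def orientation(theta):                    # 1-DoF "sensor orientation" = joint angle
    return theta

def acceleration(theta):                   # finite-difference acceleration of a point at unit radius
    return np.gradient(np.gradient(np.cos(theta), t), t)

rng = np.random.default_rng(0)
ori_meas = orientation(theta_true) + 0.02 * rng.standard_normal(T)
acc_meas = acceleration(theta_true) + 0.5 * rng.standard_normal(T)

def residuals(theta, w_ori=1.0, w_acc=0.05):   # assumed weights
    return np.concatenate([
        w_ori * (orientation(theta) - ori_meas),
        w_acc * (acceleration(theta) - acc_meas),
    ])

fit = least_squares(residuals, x0=np.zeros(T))
print("mean abs angle error:", np.mean(np.abs(fit.x - theta_true)))
```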

ps

video pdf [BibTex]

Frequency Peak Features for Low-Channel Classification in Motor Imagery Paradigms

Jayaram, V., Schölkopf, B., Grosse-Wentrup, M.

Proceedings of the 8th International IEEE/EMBS Conference on Neural Engineering (NER 2017), pages: 321-324, 2017 (conference)

ei

DOI [BibTex]

Efficient 2D and 3D Facade Segmentation using Auto-Context

Gadde, R., Jampani, V., Marlet, R., Gehler, P.

IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017 (article)

Abstract
This paper introduces a fast and efficient segmentation technique for 2D images and 3D point clouds of building facades. Facades of buildings are highly structured and consequently most methods that have been proposed for this problem aim to make use of this strong prior information. Contrary to most prior work, we describe a system that is almost domain independent and consists of standard segmentation methods. We train a sequence of boosted decision trees using auto-context features. This is learned using stacked generalization. We find that this technique performs better than, or comparably to, all previously published methods and present empirical results on all available 2D and 3D facade benchmark datasets. The proposed method is simple to implement, easy to extend, and very efficient at test-time inference.
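
A minimal sketch of the stacking recipe described above: boosted decision trees whose out-of-fold class probabilities are fed back as auto-context features to a second stage. The data here are toy vectors; in the real pipeline the context features are class probabilities sampled at spatial offsets around each pixel or point.

```python
# Two-stage auto-context with boosted trees and stacked generalization (toy data).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 10))                 # per-pixel appearance features (toy)
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)      # toy labels

# Stage 1: boosted trees; out-of-fold probabilities implement stacked generalization.
clf1 = GradientBoostingClassifier(random_state=0)
context = cross_val_predict(clf1, X, y, cv=5, method="predict_proba")

# Stage 2: append the stage-1 class probabilities as auto-context features.
X2 = np.hstack([X, context])
clf2 = GradientBoostingClassifier(random_state=0).fit(X2, y)
print(clf2.score(X2, y))
```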

ps

arXiv Project Page [BibTex]

ClothCap: Seamless 4D Clothing Capture and Retargeting

Pons-Moll, G., Pujades, S., Hu, S., Black, M.

ACM Transactions on Graphics, (Proc. SIGGRAPH), 36(4), 2017, Two first authors contributed equally (article)

Abstract
Designing and simulating realistic clothing is challenging and, while several methods have addressed the capture of clothing from 3D scans, previous methods have been limited to single garments and simple motions, lack detail, or require specialized texture patterns. Here we address the problem of capturing regular clothing on fully dressed people in motion. People typically wear multiple pieces of clothing at a time. To estimate the shape of such clothing, track it over time, and render it believably, each garment must be segmented from the others and the body. Our ClothCap approach uses a new multi-part 3D model of clothed bodies, automatically segments each piece of clothing, estimates the naked body shape and pose under the clothing, and tracks the 3D deformations of the clothing over time. We estimate the garments and their motion from 4D scans; that is, high-resolution 3D scans of the subject in motion at 60 fps. The model allows us to capture a clothed person in motion, extract their clothing, and retarget the clothing to new body shapes. ClothCap provides a step towards virtual try-on with a technology for capturing, modeling, and analyzing clothing in motion.

ps

video project_page paper link (url) Project Page [BibTex]

Mobile microrobots for bioengineering applications

Ceylan, H., Giltinan, J., Kozielski, K., Sitti, M.

Lab on a Chip, 17(10):1705-1724, Royal Society of Chemistry, 2017 (article)

Abstract
Untethered micron-scale mobile robots can navigate and non-invasively perform specific tasks inside unprecedented and hard-to-reach inner human body sites and inside enclosed organ-on-a-chip microfluidic devices with live cells. They are intended to operate robustly and safely in complex physiological environments, where they will have a transforming impact in bioengineering and healthcare. Research along this line has already demonstrated significant progress, increasing attention, and high promise over the past several years. The first-generation microrobots, which could deliver therapeutics and other cargo to targeted specific body sites, have just started to be tested inside small animals toward clinical use. Here, we review frontline advances in design, fabrication, and testing of untethered mobile microrobots for bioengineering applications. We convey the most impactful and recent strategies in actuation, mobility, sensing, and other functional capabilities of mobile microrobots, and discuss their potential advantages and drawbacks to operate inside complex, enclosed and physiologically relevant environments. We lastly draw an outlook to provide directions toward more sophisticated designs and applications, considering biodegradability, immunogenicity, mobility, sensing, and possible medical interventions in complex microenvironments.

pi

DOI Project Page Project Page [BibTex]

Likelihood-based parameter estimation and comparison of dynamical cognitive models

Schütt, H. H., Rothkegel, L. O. M., Trukenbrod, H. A., Reich, S., Wichmann, F. A., Engbert, R.

Psychological Review, 124(4):505-524, 2017 (article)

DOI [BibTex]

Towards Accurate Marker-less Human Shape and Pose Estimation over Time

Huang, Y., Bogo, F., Lassner, C., Kanazawa, A., Gehler, P. V., Romero, J., Akhter, I., Black, M. J.

In International Conference on 3D Vision (3DV), 2017 (inproceedings)

Abstract
Existing markerless motion capture methods often assume known backgrounds, static cameras, and sequence specific motion priors, limiting their application scenarios. Here we present a fully automatic method that, given multiview videos, estimates 3D human pose and body shape. We take the recently proposed SMPLify method [12] as the base method and extend it in several ways. First we fit a 3D human body model to 2D features detected in multi-view images. Second, we use a CNN method to segment the person in each image and fit the 3D body model to the contours, further improving accuracy. Third we utilize a generic and robust DCT temporal prior to handle the left and right side swapping issue sometimes introduced by the 2D pose estimator. Validation on standard benchmarks shows our results are comparable to the state of the art and also provide a realistic 3D shape avatar. We also demonstrate accurate results on HumanEva and on challenging monocular sequences of dancing from YouTube.
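
The DCT temporal prior mentioned above can be illustrated by projecting a noisy per-frame trajectory onto its lowest-frequency DCT basis functions. The sketch below is a 1D toy (frame count, retained coefficients, and noise levels are assumptions), not the paper's formulation.

```python
# DCT temporal prior as low-pass projection: keep only the first K DCT coefficients
# of a per-frame trajectory to suppress jitter and isolated swap-like outliers.
import numpy as np
from scipy.fft import dct, idct

T, K = 100, 10                               # frames, retained DCT coefficients (assumed)
t = np.linspace(0, 1, T)
clean = np.sin(2 * np.pi * t)                # toy 1D joint coordinate over time
noisy = clean.copy()
noisy[40] = -noisy[40]                       # a single swap-like outlier
noisy += 0.05 * np.random.default_rng(0).standard_normal(T)

coeffs = dct(noisy, norm="ortho")
coeffs[K:] = 0.0                             # keep only the K smoothest basis functions
smoothed = idct(coeffs, norm="ortho")
print(np.abs(smoothed - clean).max())        # the outlier is strongly attenuated
```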

ps

Code pdf [BibTex]


An image-computable psychophysical spatial vision model

Schütt, H. H., Wichmann, F. A.

Journal of Vision, 17(12), 2017 (article)

ei

DOI [BibTex]

Methods and measurements to compare men against machines

Wichmann, F. A., Janssen, D. H. J., Geirhos, R., Aguilar, G., Schütt, H. H., Maertens, M., Bethge, M.

Human Vision and Electronic Imaging (HVEI 2016), pages: 36-45, Society for Imaging Science and Technology, 2017 (conference)

ei

DOI [BibTex]

Is Growing Good for Learning?

Heim, S., Spröwitz, A.

Proceedings of the 8th International Symposium on Adaptive Motion of Animals and Machines AMAM2017, 2017 (conference)

dlg

[BibTex]

Surface tension-driven self-alignment

Mastrangeli, M., Zhou, Q., Sariola, V., Lambert, P.

Soft Matter, 13, pages: 304-327, The Royal Society of Chemistry, 2017 (article)

Abstract
Surface tension-driven self-alignment is a passive and highly-accurate positioning mechanism that can significantly simplify and enhance the construction of advanced microsystems. After years of research, demonstrations and developments, the surface engineering and manufacturing technology enabling capillary self-alignment has achieved a degree of maturity conducive to a successful transfer to industrial practice. In view of this transition, a broad and accessible review of the physics, material science and applications of capillary self-alignment is presented. Statics and dynamics of the self-aligning action of deformed liquid bridges are explained through simple models and experiments, and all fundamental aspects of surface patterning and conditioning, of choice, deposition and confinement of liquids, and of component feeding and interconnection to substrates are illustrated through relevant applications in micro- and nanotechnology. A final outline addresses remaining challenges and additional extensions envisioned to further spread the use and fully exploit the potential of the technique.

pi

link (url) DOI [BibTex]

Design of a visualization scheme for functional connectivity data of Human Brain

Bramlage, L.

Hochschule Osnabrück - University of Applied Sciences, 2017 (thesis)

sf

Bramlage_BSc_2017.pdf [BibTex]


Decentralized Simultaneous Multi-target Exploration using a Connected Network of Multiple Robots

Nestmeyer, T., Robuffo Giordano, P., Bülthoff, H. H., Franchi, A.

In Autonomous Robots, pages: 989-1011, 2017 (incollection)

ps

[BibTex]

Embedded spherical localization for micro underwater vehicles based on attenuation of electro-magnetic carrier signals

Duecker, D., Geist, A. R., Hengeler, M., Kreuzer, E., Pick, M., Rausch, V., Solowjow, E.

Sensors, 17(5):959, Multidisciplinary Digital Publishing Institute, 2017 (article)

[BibTex]

Methods, apparatuses, and systems for micromanipulation with adhesive fibrillar structures

Sitti, M., Mengüç, Y.

US Patent 9,731,422, 2017 (patent)

Abstract
The present invention comprises methods for the fabrication of micro- and/or nano-scale adhesive fibers and their use for the movement and manipulation of objects. Further disclosed is a method of manipulating a part by providing a manipulation device with a plurality of fibers, where each fiber has a tip with a flat surface that is parallel to a backing layer, contacting the flat surfaces with an object, moving the object to a new location, and then disengaging the tips from the object.

pi

link (url) [BibTex]


Kernel-based tests for joint independence

Pfister, N., Bühlmann, P., Schölkopf, B., Peters, J.

Journal of the Royal Statistical Society: Series B (Statistical Methodology), 80(1):5-31, 2017 (article)

ei

DOI [BibTex]

End-to-End Learning for Image Burst Deblurring

Wieschollek, P., Schölkopf, B., Lensch, H. P. A., Hirsch, M.

Computer Vision - ACCV 2016 - 13th Asian Conference on Computer Vision, 10114, pages: 35-51, Image Processing, Computer Vision, Pattern Recognition, and Graphics, (Editors: Lai, S.-H., Lepetit, V., Nishino, K., and Sato, Y. ), Springer, 2017 (conference)

ei

[BibTex]
