

2019


Soft-magnetic coatings as possible sensors for magnetic imaging of superconductors

Ionescu, A., Simmendinger, J., Bihler, M., Miksch, C., Fischer, P., Soltan, S., Schütz, G., Albrecht, J.

Supercond. Sci. and Tech., 33, pages: 015002, IOP, December 2019 (article)

Abstract
Magnetic imaging of superconductors typically requires a soft-magnetic material placed on top of the superconductor to probe local magnetic fields. For reasonable results, the influence of the magnet on the superconductor has to be small. Thin YBCO films with soft-magnetic coatings are investigated using SQUID magnetometry. Detailed measurements of the magnetic moment as a function of temperature, magnetic field and time have been performed for different heterostructures. It is found that the modification of the superconducting transport in these heterostructures strongly depends on the magnetic and structural properties of the soft-magnetic material. This effect is especially pronounced for an inhomogeneous coating consisting of ferromagnetic nanoparticles.

link (url) DOI [BibTex]


HPLC of monolayer-protected Gold clusters with baseline separation

Knoppe, S., Vogt, P.

Analytical Chemistry, 91, pages: 1603, December 2019 (article)

Abstract
The properties of ultrasmall metal nanoparticles (ca. 10–200 metal atoms), or monolayer-protected metal clusters (MPCs), drastically depend on their atomic structure. For systematic characterization and application, assessment of their purity is of high importance. Currently, the gold standard for purity control of MPCs is mass spectrometry (MS). Mass spectrometry, however, cannot always detect small impurities; MS of certain clusters, for example, ESI-TOF of Au40(SR)24, is not successful at all. Here we present a simple reversed-phase HPLC method for purity control of a series of small alkanethiolate-protected gold clusters. The method allows the detection of small impurities with high sensitivity. A linear correlation between the alkyl chain length of Au25(SC_n H_(2n+1))18 clusters (n = 6, 8, 10, 12) and their retention time was observed.

link (url) DOI [BibTex]


Attacking Optical Flow

Ranjan, A., Janai, J., Geiger, A., Black, M. J.

In Proceedings International Conference on Computer Vision (ICCV), pages: 2404-2413, IEEE, 2019 IEEE/CVF International Conference on Computer Vision (ICCV), November 2019, ISSN: 2380-7504 (inproceedings)

Abstract
Deep neural nets achieve state-of-the-art performance on the problem of optical flow estimation. Since optical flow is used in several safety-critical applications like self-driving cars, it is important to gain insights into the robustness of those techniques. Recently, it has been shown that adversarial attacks easily fool deep neural networks to misclassify objects. The robustness of optical flow networks to adversarial attacks, however, has not been studied so far. In this paper, we extend adversarial patch attacks to optical flow networks and show that such attacks can compromise their performance. We show that corrupting a small patch of less than 1% of the image size can significantly affect optical flow estimates. Our attacks lead to noisy flow estimates that extend significantly beyond the region of the attack, in many cases even completely erasing the motion of objects in the scene. While networks using an encoder-decoder architecture are very sensitive to these attacks, we found that networks using a spatial pyramid architecture are less affected. We analyse the success and failure of attacking both architectures by visualizing their feature maps and comparing them to classical optical flow techniques which are robust to these attacks. We also demonstrate that such attacks are practical by placing a printed pattern into real scenes.

Video Project Page Paper Supplementary Material link (url) DOI [BibTex]


Acoustic hologram enhanced phased arrays for ultrasonic particle manipulation

Cox, L., Melde, K., Croxford, A., Fischer, P., Drinkwater, B.

Phys. Rev. Applied, 12, pages: 064055, November 2019 (article)

Abstract
The ability to shape ultrasound fields is important for particle manipulation, medical therapeutics and imaging applications. If the amplitude and/or phase is spatially varied across the wavefront then it is possible to project ‘acoustic images’. When attempting to form an arbitrary desired static sound field, acoustic holograms are superior to phased arrays due to their significantly higher phase fidelity. However, they lack the dynamic flexibility of phased arrays. Here, we demonstrate how to combine the high-fidelity advantages of acoustic holograms with the dynamic control of phased arrays in the ultrasonic frequency range. Holograms are used with a 64-element phased array, driven with continuous excitation. Moving the position of the projected hologram via phase delays which steer the output beam is demonstrated experimentally. This allows the creation of a much more tightly focused point than with the phased array alone, whilst still being reconfigurable. It also allows the complex movement at a water-air interface of a “phase surfer” along a phase track or the manipulation of a more arbitrarily shaped particle via amplitude traps. Furthermore, a particle manipulation device with two emitters and a single split hologram is demonstrated that allows the positioning of a “phase surfer” along a 1D axis. This paper opens the door for new applications with complex manipulation of ultrasound whilst minimising the complexity and cost of the apparatus.

link (url) DOI [BibTex]


A Helical Microrobot with an Optimized Propeller-Shape for Propulsion in Viscoelastic Biological Media

Li, D., Jeong, M., Oren, E., Yu, T., Qiu, T.

Robotics, 8, pages: 87, MDPI, October 2019 (article)

Abstract
One major challenge for microrobots is to penetrate and effectively move through viscoelastic biological tissues. Most existing microrobots can only propel in viscous liquids. Recent advances demonstrate that sub-micron robots can actively penetrate nanoporous biological tissue, such as the vitreous of the eye. However, it is still difficult to propel a micron-sized device through dense biological tissue. Here, we report that a special twisted helical shape together with a high aspect ratio in cross-section permits a microrobot with a diameter of hundreds of micrometers to move through mouse liver tissue. The helical microrobot is driven by a rotating magnetic field and localized by ultrasound imaging inside the tissue. The twisted ribbon is made of molybdenum and a sharp tip is chemically etched to generate a higher pressure at the edge of the propeller to break the biopolymeric network of the dense tissue.

link (url) DOI [BibTex]


Acoustic Holographic Cell Patterning in a Biocompatible Hydrogel

Ma, Z., Holle, A., Melde, K., Qiu, T., Poeppel, K., Kadiri, V., Fischer, P.

Adv. Mat., 32(1904181), October 2019 (article)

Abstract
Acoustophoresis is promising as a rapid, biocompatible, non-contact cell manipulation method, where cells are arranged along the nodes or antinodes of the acoustic field. Typically, the acoustic field is formed in a resonator, which results in highly symmetric regular patterns. However, arbitrary, non-symmetrically shaped cell assemblies are necessary to obtain the irregular cellular arrangements found in biological tissues. We show that arbitrarily shaped cell patterns can be obtained from the complex acoustic field distribution defined by an acoustic hologram. Attenuation of the sound field induces localized acoustic streaming and the resultant convection flow gently delivers the suspended cells to the image plane where they form the designed pattern. We show that the process can be implemented in a biocompatible collagen solution, which can then undergo gelation to immobilize the cell pattern inside the viscoelastic matrix. The patterned cells exhibit F-actin-based protrusions, which indicates that the cells grow and thrive within the matrix. Cell viability assays and brightfield imaging after one week confirm cell survival and that the patterns persist. Acoustophoretic cell manipulation by holographic fields thus holds promise for non-contact, long-range, long-term cellular pattern formation, with a wide variety of potential applications in tissue engineering and mechanobiology.

link (url) DOI [BibTex]


Occupancy Flow: 4D Reconstruction by Learning Particle Dynamics

Niemeyer, M., Mescheder, L., Oechsle, M., Geiger, A.

International Conference on Computer Vision, October 2019 (conference)

Abstract
Deep learning based 3D reconstruction techniques have recently achieved impressive results. However, while state-of-the-art methods are able to output complex 3D geometry, it is not clear how to extend these results to time-varying topologies. Approaches treating each time step individually lack continuity and exhibit slow inference, while traditional 4D reconstruction methods often utilize a template model or discretize the 4D space at fixed resolution. In this work, we present Occupancy Flow, a novel spatio-temporal representation of time-varying 3D geometry with implicit correspondences. Towards this goal, we learn a temporally and spatially continuous vector field which assigns a motion vector to every point in space and time. In order to perform dense 4D reconstruction from images or sparse point clouds, we combine our method with a continuous 3D representation. Implicitly, our model yields correspondences over time, thus enabling fast inference while providing a sound physical description of the temporal dynamics. We show that our method can be used for interpolation and reconstruction tasks, and demonstrate the accuracy of the learned correspondences. We believe that Occupancy Flow is a promising new 4D representation which will be useful for a variety of spatio-temporal reconstruction tasks.

pdf poster suppmat code Project page video blog [BibTex]


Texture Fields: Learning Texture Representations in Function Space

Oechsle, M., Mescheder, L., Niemeyer, M., Strauss, T., Geiger, A.

International Conference on Computer Vision, October 2019 (conference)

Abstract
In recent years, substantial progress has been achieved in learning-based reconstruction of 3D objects. At the same time, generative models were proposed that can generate highly realistic images. However, despite this success in these closely related tasks, texture reconstruction of 3D objects has received little attention from the research community and state-of-the-art methods are either limited to comparably low resolution or constrained experimental setups. A major reason for these limitations is that common representations of texture are inefficient or hard to interface for modern deep learning techniques. In this paper, we propose Texture Fields, a novel texture representation which is based on regressing a continuous 3D function parameterized with a neural network. Our approach circumvents limiting factors like shape discretization and parameterization, as the proposed texture representation is independent of the shape representation of the 3D object. We show that Texture Fields are able to represent high frequency texture and naturally blend with modern deep learning techniques. Experimentally, we find that Texture Fields compare favorably to state-of-the-art methods for conditional texture reconstruction of 3D objects and enable learning of probabilistic generative models for texturing unseen 3D models. We believe that Texture Fields will become an important building block for the next generation of generative 3D models.

pdf suppmat video poster blog Project Page [BibTex]


Dynamics of beneficial epidemics

Berdahl, A., Brelsford, C., De Bacco, C., Dumas, M., Ferdinand, V., Grochow, J. A., Hébert-Dufresne, L., Kallus, Y., Kempes, C. P., Kolchinsky, A., Larremore, D. B., Libby, E., Power, E. A., A., S. C., Tracey, B. D.

Scientific Reports, 9, pages: 15093, October 2019 (article)

DOI [BibTex]


Arrays of plasmonic nanoparticle dimers with defined nanogap spacers

Jeong, H., Adams, M. C., Guenther, J., Alarcon-Correa, M., Kim, I., Choi, E., Miksch, C., Mark, A. F. M., Mark, A. G., Fischer, P.

ACS Nano, 13, pages: 11453-11459, September 2019 (article)

Abstract
Plasmonic molecules are building blocks of metallic nanostructures that give rise to intriguing optical phenomena with similarities to those seen in molecular systems. The ability to design plasmonic hybrid structures and molecules with nanometric resolution would enable applications in optical metamaterials and sensing that presently cannot be demonstrated, because of a lack of suitable fabrication methods allowing the structural control of the plasmonic atoms on a large scale. Here we demonstrate a wafer-scale “lithography-free” parallel fabrication scheme to realize nanogap plasmonic meta-molecules with precise control over their size, shape, material, and orientation. We demonstrate how we can tune the corresponding coupled resonances through the entire visible spectrum. Our fabrication method, based on glancing angle physical vapor deposition with gradient shadowing, permits critical parameters to be varied across the wafer and thus is ideally suited to screen potential structures. We obtain billions of aligned dimer structures with controlled variation of the spectral properties across the wafer. We spectroscopically map the plasmonic resonances of gold dimer structures and show that they not only are in good agreement with numerically modeled spectra, but also remain functional, at least for a year, in ambient conditions.

link (url) DOI [BibTex]


NoVA: Learning to See in Novel Viewpoints and Domains

Coors, B., Condurache, A. P., Geiger, A.

In 2019 International Conference on 3D Vision (3DV), pages: 116-125, IEEE, 2019 International Conference on 3D Vision (3DV), September 2019 (inproceedings)

Abstract
Domain adaptation techniques enable the re-use and transfer of existing labeled datasets from a source to a target domain in which little or no labeled data exists. Recently, image-level domain adaptation approaches have demonstrated impressive results in adapting from synthetic to real-world environments by translating source images to the style of a target domain. However, the domain gap between source and target may not only be caused by a different style but also by a change in viewpoint. This case necessitates a semantically consistent translation of source images and labels to the style and viewpoint of the target domain. In this work, we propose the Novel Viewpoint Adaptation (NoVA) model, which enables unsupervised adaptation to a novel viewpoint in a target domain for which no labeled data is available. NoVA utilizes an explicit representation of the 3D scene geometry to translate source view images and labels to the target view. Experiments on adaptation to synthetic and real-world datasets show the benefit of NoVA compared to state-of-the-art domain adaptation approaches on the task of semantic segmentation.

pdf suppmat poster video DOI [BibTex]


Genetically modified M13 bacteriophage nanonets for enzyme catalysis and recovery

Kadiri, V. M., Alarcon-Correa, M., Guenther, J. P., Ruppert, J., Bill, J., Rothenstein, D., Fischer, P.

Catalysts, 9, pages: 723, August 2019 (article)

Abstract
Enzyme-based biocatalysis exhibits multiple advantages over inorganic catalysts, including the biocompatibility and the unchallenged specificity of enzymes towards their substrate. The recovery and repeated use of enzymes is essential for any realistic application in biotechnology, but is not easily achieved with current strategies. For this purpose, enzymes are often immobilized on inorganic scaffolds, which could entail a reduction of the enzymes’ activity. Here, we show that immobilization to a nano-scaled biological scaffold, a nanonetwork of end-to-end cross-linked M13 bacteriophages, ensures high enzymatic activity and at the same time allows for the simple recovery of the enzymes. The bacteriophages have been genetically engineered to express AviTags at their ends, which permit biotinylation and their specific end-to-end self-assembly while allowing space on the major coat protein for enzyme coupling. We demonstrate that the phages form nanonetwork structures and that these so-called nanonets remain highly active even after re-using the nanonets multiple times in a flow-through reactor.

link (url) DOI [BibTex]


Light-controlled micromotors and soft microrobots

Palagi, S., Singh, D. P., Fischer, P.

Adv. Opt. Mat., 7, pages: 1900370, August 2019 (article)

Abstract
Mobile microscale devices and microrobots can be powered by catalytic reactions (chemical micromotors) or by external fields. This report is focused on the role of light as a versatile means for wirelessly powering and controlling such microdevices. Recent advances in the development of autonomous micromotors are discussed, where light permits their actuation with unprecedented control and thereby enables advances in the field of active matter. In addition, structuring the light field is a new means to drive soft microrobots that are based on (photo‐) responsive polymers. The behavior of the two main classes of thermo‐ and photoresponsive polymers adopted in microrobotics (poly(N‐isopropylacrylamide) and liquid‐crystal elastomers) is analyzed, and recent applications are reported. The advantages and limitations of controlling micromotors and microrobots by light are reviewed, and some of the remaining challenges in the development of novel photo‐active materials for micromotors and microrobots are discussed.

link (url) DOI [BibTex]


Soft Continuous Surface for Micromanipulation driven by Light-controlled Hydrogels

Choi, E., Jeong, H., Qiu, T., Fischer, P., Palagi, S.

4th IEEE International Conference on Manipulation, Automation and Robotics at Small Scales (MARSS), July 2019 (conference)

Abstract
Remotely controlled, automated actuation and manipulation at the microscale is essential for a number of micro-manufacturing, biology, and lab-on-a-chip applications. To transport and manipulate micro-objects, arrays of remotely controlled micro-actuators are required, which, in turn, typically require complex and expensive solid-state chips. Here, we show that a continuous surface can function as a highly parallel, many-degree-of-freedom, wirelessly controlled microactuator with seamless deformation. The soft continuous surface is based on a hydrogel that undergoes a volume change in response to applied light. The fabrication of the hydrogels and the characterization of their optical and thermomechanical behaviors are reported. The temperature-dependent localized deformation of the hydrogel is also investigated by numerical simulations. Static and dynamic deformations are obtained in the soft material by projecting light fields at high spatial resolution onto the surface. By controlling such deformations in open loop and especially closed loop, automated photoactuation is achieved. The surface deformations are then exploited to examine how inert microbeads can be manipulated autonomously on the surface. We believe that the proposed approach suggests ways to implement universal 2D micromanipulation schemes that can be useful for automation in microfabrication and lab-on-a-chip applications.

[BibTex]


Soft Phantom for the Training of Renal Calculi Diagnostics and Lithotripsy

Li, D., Suarez-Ibarrola, R., Choi, E., Jeong, M., Gratzke, C., Miernik, A., Fischer, P., Qiu, T.

41st Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), July 2019 (conference)

Abstract
Organ models are important for medical training and surgical planning. With the fast development of additive fabrication technologies, including 3D printing, the fabrication of 3D organ phantoms with precise anatomical features becomes possible. Here, we develop the first high-resolution kidney phantom based on soft material assembly, by combining 3D printing and polymer molding techniques. The phantom exhibits both the detailed anatomy of a human kidney and the elasticity of soft tissues. The phantom assembly can be separated into two parts on the coronal plane, thus large renal calculi are readily placed at any desired location of the calyx. With our sealing method, the assembled phantom withstands a hydraulic pressure that is four times the normal intrarenal pressure, thus it allows the simulation of medical procedures under realistic pressure conditions. The medical diagnostics of the renal calculi is performed by multiple imaging modalities, including X-ray, ultrasound imaging and endoscopy. The endoscopic lithotripsy is also successfully performed on the phantom. The use of a multifunctional soft phantom assembly thus shows great promise for the simulation of minimally invasive medical procedures under realistic conditions.

[BibTex]


Superior Magnetic Performance in FePt L1_0 Nanomaterials

Son, K., Ryu, G. H., Jeong, H., Fink, L., Merz, M., Nagel, P., Schuppler, S., Richter, G., Goering, E., Schütz, G.

Small, 15(1902353), July 2019 (article)

Abstract
The discovery of the high maximum energy product of 59 MGOe for NdFeB magnets was a breakthrough in the development of permanent magnets, with a tremendous impact in many fields of technology. This value has remained the world record for 40 years. This work reports on a reliable and robust route to realize nearly perfectly ordered L1_0-phase FePt nanoparticles, leading to an unprecedented energy product of 80 MGOe at room temperature. Furthermore, with a 3 nm Au coverage, the magnetic polarization of these nanomagnets can be enhanced by 25%, exceeding 1.8 T. This exceptional magnetization and anisotropy are confirmed using multiple imaging and spectroscopic methods, which reveal highly consistent results. Due to this unprecedentedly large energy product, the material can be envisaged as a new advanced basic magnetic component in modern micro- and nanosized devices.

link (url) DOI [BibTex]


A Magnetic Actuation System for the Active Microrheology in Soft Biomaterials

Jeong, M., Choi, E., Li, D., Palagi, S., Fischer, P., Qiu, T.

4th IEEE International Conference on Manipulation, Automation and Robotics at Small Scales (MARSS), July 2019 (conference)

Abstract
Microrheology is a key technique to characterize soft materials at small scales. The microprobe is wirelessly actuated and therefore typically only low forces or torques can be applied, which limits the range of the applied strain. Here, we report a new magnetic actuation system for microrheology consisting of an array of rotating permanent magnets, which achieves a rotating magnetic field with a spatially homogeneous high field strength of ~100 mT in a working volume of ~20×20×20 mm³. Compared to a traditional electromagnetic coil system, the permanent magnet assembly is portable and does not require cooling, and it exerts a large magnetic torque on the microprobe that is an order of magnitude higher than previous setups. Experimental results demonstrate that the measurement range of the soft gels’ elasticity covers at least five orders of magnitude. With the large actuation torque, it is also possible to study the fracture mechanics of soft biomaterials at small scales.

[BibTex]


Taking a Deeper Look at the Inverse Compositional Algorithm

Lv, Z., Dellaert, F., Rehg, J. M., Geiger, A.

In Proceedings IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), IEEE International Conference on Computer Vision and Pattern Recognition (CVPR) 2019, June 2019 (inproceedings)

Abstract
In this paper, we provide a modern synthesis of the classic inverse compositional algorithm for dense image alignment. We first discuss the assumptions made by this well-established technique, and subsequently propose to relax these assumptions by incorporating data-driven priors into this model. More specifically, we unroll a robust version of the inverse compositional algorithm and replace multiple components of this algorithm using more expressive models whose parameters we train in an end-to-end fashion from data. Our experiments on several challenging 3D rigid motion estimation tasks demonstrate the advantages of combining optimization with learning-based techniques, outperforming the classic inverse compositional algorithm as well as data-driven image-to-pose regression approaches.

pdf suppmat Video Project Page Poster [BibTex]


MOTS: Multi-Object Tracking and Segmentation

Voigtlaender, P., Krause, M., Osep, A., Luiten, J., Sekar, B. B. G., Geiger, A., Leibe, B.

In Proceedings IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), IEEE International Conference on Computer Vision and Pattern Recognition (CVPR) 2019, June 2019 (inproceedings)

Abstract
This paper extends the popular task of multi-object tracking to multi-object tracking and segmentation (MOTS). Towards this goal, we create dense pixel-level annotations for two existing tracking datasets using a semi-automatic annotation procedure. Our new annotations comprise 65,213 pixel masks for 977 distinct objects (cars and pedestrians) in 10,870 video frames. For evaluation, we extend existing multi-object tracking metrics to this new task. Moreover, we propose a new baseline method which jointly addresses detection, tracking, and segmentation with a single convolutional network. We demonstrate the value of our datasets by achieving improvements in performance when training on MOTS annotations. We believe that our datasets, metrics and baseline will become a valuable resource towards developing multi-object tracking approaches that go beyond 2D bounding boxes.

pdf suppmat Project Page Poster Video [BibTex]


PointFlowNet: Learning Representations for Rigid Motion Estimation from Point Clouds

Behl, A., Paschalidou, D., Donne, S., Geiger, A.

In Proceedings IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), IEEE International Conference on Computer Vision and Pattern Recognition (CVPR) 2019, June 2019 (inproceedings)

Abstract
Despite significant progress in image-based 3D scene flow estimation, the performance of such approaches has not yet reached the fidelity required by many applications. Simultaneously, these applications are often not restricted to image-based estimation: laser scanners provide a popular alternative to traditional cameras, for example in the context of self-driving cars, as they directly yield a 3D point cloud. In this paper, we propose to estimate 3D motion from such unstructured point clouds using a deep neural network. In a single forward pass, our model jointly predicts 3D scene flow as well as the 3D bounding box and rigid body motion of objects in the scene. While the prospect of estimating 3D scene flow from unstructured point clouds is promising, it is also a challenging task. We show that the traditional global representation of rigid body motion prohibits inference by CNNs, and propose a translation equivariant representation to circumvent this problem. For training our deep network, a large dataset is required. Because of this, we augment real scans from KITTI with virtual objects, realistically modeling occlusions and simulating sensor noise. A thorough comparison with classic and learning-based techniques highlights the robustness of the proposed approach.

pdf suppmat Project Page Poster Video [BibTex]


Learning Non-volumetric Depth Fusion using Successive Reprojections

Donne, S., Geiger, A.

In Proceedings IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), IEEE International Conference on Computer Vision and Pattern Recognition (CVPR) 2019, June 2019 (inproceedings)

Abstract
Given a set of input views, multi-view stereopsis techniques estimate depth maps to represent the 3D reconstruction of the scene; these are fused into a single, consistent, reconstruction -- most often a point cloud. In this work we propose to learn an auto-regressive depth refinement directly from data. While deep learning has improved the accuracy and speed of depth estimation significantly, learned MVS techniques remain limited to the plane-sweeping paradigm. We refine a set of input depth maps by successively reprojecting information from neighbouring views to leverage multi-view constraints. Compared to learning-based volumetric fusion techniques, an image-based representation allows significantly more detailed reconstructions; compared to traditional point-based techniques, our method learns noise suppression and surface completion in a data-driven fashion. Due to the limited availability of high-quality reconstruction datasets with ground truth, we introduce two novel synthetic datasets to (pre-)train our network. Our approach is able to improve both the output depth maps and the reconstructed point cloud, for both learned and traditional depth estimation front-ends, on both synthetic and real data.

pdf suppmat Project Page Video Poster blog [BibTex]


Connecting the Dots: Learning Representations for Active Monocular Depth Estimation

Riegler, G., Liao, Y., Donne, S., Koltun, V., Geiger, A.

In Proceedings IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), IEEE International Conference on Computer Vision and Pattern Recognition (CVPR) 2019, June 2019 (inproceedings)

Abstract
We propose a technique for depth estimation with a monocular structured-light camera, i.e., a calibrated stereo set-up with one camera and one laser projector. Instead of formulating the depth estimation via a correspondence search problem, we show that a simple convolutional architecture is sufficient for high-quality disparity estimates in this setting. As accurate ground-truth is hard to obtain, we train our model in a self-supervised fashion with a combination of photometric and geometric losses. Further, we demonstrate that the projected pattern of the structured light sensor can be reliably separated from the ambient information. This can then be used to improve depth boundaries in a weakly supervised fashion by modeling the joint statistics of image and depth edges. The model trained in this fashion compares favorably to the state-of-the-art on challenging synthetic and real-world datasets. In addition, we contribute a novel simulator, which allows benchmarking active depth prediction algorithms in controlled conditions.

avg

pdf suppmat Poster Project Page [BibTex]



Superquadrics Revisited: Learning 3D Shape Parsing beyond Cuboids

Paschalidou, D., Ulusoy, A. O., Geiger, A.

In Proceedings IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), IEEE International Conference on Computer Vision and Pattern Recognition (CVPR) 2019, June 2019 (inproceedings)

Abstract
Abstracting complex 3D shapes with parsimonious part-based representations has been a long-standing goal in computer vision. This paper presents a learning-based solution to this problem which goes beyond the traditional 3D cuboid representation by exploiting superquadrics as atomic elements. We demonstrate that superquadrics lead to more expressive 3D scene parses while being easier to learn than 3D cuboid representations. Moreover, we provide an analytical solution to the Chamfer loss which avoids the need for computationally expensive reinforcement learning or iterative prediction. Our model learns to parse 3D objects into consistent superquadric representations without supervision. Results on various ShapeNet categories as well as the SURREAL human body dataset demonstrate the flexibility of our model in capturing fine details and complex poses that could not have been modelled using cuboids.

avg

Project Page Poster suppmat pdf Video blog handout [BibTex]



The acoustic hologram and particle manipulation with structured acoustic fields

Melde, K.

Karlsruher Institut für Technologie (KIT), May 2019 (phdthesis)

Abstract
This thesis presents holograms as a novel approach to create arbitrary ultrasound fields. It is shown how any wavefront can simply be encoded in the thickness profile of a phase plate. Contemporary 3D-printers enable fabrication of structured surfaces with feature sizes corresponding to wavelengths of ultrasound up to 7.5 MHz in water—covering the majority of medical and industrial applications. The whole workflow for designing and creating acoustic holograms has been developed and is presented in this thesis. To reconstruct the encoded fields a single transducer element is sufficient. Arbitrary fields are demonstrated in transmission and reflection configurations in water and air and validated by extensive hydrophone scans. To complement these time-consuming measurements, a new approach, based on thermography, is presented, which enables volumetric sound field scans in just a few seconds. Several original experiments demonstrate the advantages of using acoustic holograms for particle manipulation. Most notably, directed parallel assembly of microparticles in the shape of a projected acoustic image has been shown and extended to a fabrication method by fusing the particles in a polymerization reaction. Further, seemingly dynamic propulsion from a static hologram is demonstrated by controlling the phase gradient along a projected track. The necessary complexity to create ultrasound fields with set amplitude and phase distributions is easily managed using acoustic holograms. The acoustic hologram is a simple and cost-effective tool for shaping ultrasound fields with high fidelity. It is expected to have an impact in many applications where ultrasound is employed.
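The core idea—encoding a desired phase map in a thickness profile—can be sketched numerically. All material parameters, the drive frequency, and the sign convention below are illustrative assumptions, not values from the thesis:

```python
import numpy as np

# Thickness profile of a phase plate that imprints a desired phase map on a
# plane ultrasound wave. Parameter values are illustrative placeholders:
# a typical 3D-printing resin is somewhat faster than water.
f = 2.0e6          # drive frequency [Hz] (assumed)
c_water = 1480.0   # speed of sound in water [m/s]
c_plate = 2500.0   # speed of sound in the printed material [m/s] (assumed)

def thickness(phase):
    """Plate thickness [m] producing `phase` [rad] of relative phase advance.

    Replacing a water layer of thickness h by the (faster) plate material
    advances the phase by 2*pi*f*h*(1/c_water - 1/c_plate); we wrap the
    target phase to [0, 2*pi) and solve for h.
    """
    phase = np.mod(phase, 2 * np.pi)
    return phase / (2 * np.pi * f * (1.0 / c_water - 1.0 / c_plate))

h = thickness(np.pi)  # thickness for a half-wave delay, sub-millimetre here
```

Applied pixel-by-pixel to a target phase map, this yields the printable height field of the hologram.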

pf

link (url) DOI [BibTex]


Recent advances in gold nanoparticles for biomedical applications: from hybrid structures to multi-functionality

Jeong, H., Choi, E., Ellis, E., Lee, T.

J. of Mat. Chem. B, 7, pages: 3480, May 2019 (article)

Abstract
Gold nanoparticles (Au NPs) are arguably the most versatile nanomaterials reported to date. Recent advances in nanofabrication and chemical synthesis have expanded the scope of Au NPs from classical homogeneous nanospheres to a wide range of hybrid nanostructures with programmable size, shape and composition. Novel physicochemical properties can be achieved via design and engineering of the hybrid nanostructures. In this review we discuss the recent progress in the development of complex hybrid Au NPs and propose a classification framework based on three fundamental structural dimensions (length scale, complexity and symmetry) to aid categorising, comparing and designing various types of Au NPs. Their novel functions and potential for biomedical applications will also be discussed, featuring point-of-care diagnostics by advanced optical spectroscopy and assays, as well as minimally invasive surgeries and targeted drug delivery using multifunctional nano-robots.

pf

link (url) DOI [BibTex]


Real-Time Dense Mapping for Self-Driving Vehicles using Fisheye Cameras

Cui, Z., Heng, L., Yeo, Y. C., Geiger, A., Pollefeys, M., Sattler, T.

In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA) 2019, IEEE, International Conference on Robotics and Automation, May 2019 (inproceedings)

Abstract
We present a real-time dense geometric mapping algorithm for large-scale environments. Unlike existing methods which use pinhole cameras, our implementation is based on fisheye cameras, which have a larger field of view and benefit other tasks including Visual-Inertial Odometry, localization and object detection around vehicles. Our algorithm runs on in-vehicle PCs at approximately 15 Hz, enabling vision-only 3D scene perception for self-driving vehicles. For each synchronized set of images captured by multiple cameras, we first compute a depth map for a reference camera using plane-sweeping stereo. To maintain both accuracy and efficiency, while accounting for the fact that fisheye images have a rather low resolution, we recover the depths using multiple image resolutions. We adopt the fast object detection framework YOLOv3 to remove potentially dynamic objects. At the end of the pipeline, we fuse the fisheye depth images into the truncated signed distance function (TSDF) volume to obtain a 3D map. We evaluate our method on large-scale urban datasets, and results show that our method works well even in complex environments.

avg

pdf video poster Project Page [BibTex]



Project AutoVision: Localization and 3D Scene Perception for an Autonomous Vehicle with a Multi-Camera System

Heng, L., Choi, B., Cui, Z., Geppert, M., Hu, S., Kuan, B., Liu, P., Nguyen, R. M. H., Yeo, Y. C., Geiger, A., Lee, G. H., Pollefeys, M., Sattler, T.

In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA) 2019, IEEE, International Conference on Robotics and Automation, May 2019 (inproceedings)

Abstract
Project AutoVision aims to develop localization and 3D scene perception capabilities for a self-driving vehicle. Such capabilities will enable autonomous navigation in urban and rural environments, in day and night, and with cameras as the only exteroceptive sensors. The sensor suite employs many cameras for both 360-degree coverage and accurate multi-view stereo; the use of low-cost cameras keeps the cost of this sensor suite to a minimum. In addition, the project seeks to extend the operating envelope to include GNSS-less conditions which are typical for environments with tall buildings, foliage, and tunnels. Emphasis is placed on leveraging multi-view geometry and deep learning to enable the vehicle to localize and perceive in 3D space. This paper presents an overview of the project, and describes the sensor suite and current progress in the areas of calibration, localization, and perception.

avg

pdf [BibTex]



Self-Assembled Phage-Based Colloids for High Localized Enzymatic Activity

Alarcon-Correa, M., Guenther, J., Troll, J., Kadiri, V. M., Bill, J., Fischer, P., Rothenstein, D.

ACS Nano, 13, pages: 5810–5815, March 2019 (article)

Abstract
Catalytically active colloids are model systems for chemical motors and active matter. It is desirable to replace the inorganic catalysts and the toxic fuels that are often used, with biocompatible enzymatic reactions. However, compared to inorganic catalysts, enzyme-coated colloids tend to exhibit less activity. Here, we show that the self-assembly of genetically engineered M13 bacteriophages that bind enzymes to magnetic beads ensures high and localized enzymatic activity. These phage-decorated colloids provide a proteinaceous environment for directed enzyme immobilization. The magnetic properties of the colloidal carrier particle permit repeated enzyme recovery from a reaction solution, while the enzymatic activity is retained. Moreover, localizing the phage-based construct with a magnetic field in a microcontainer allows the enzyme-phage-colloids to function as an enzymatic micropump, where the enzymatic reaction generates a fluid flow. This system shows the fastest fluid flow reported to date by a biocompatible enzymatic micropump. In addition, it is functional in complex media including blood, where the enzyme-driven micropump can be powered at the physiological blood-urea concentration.

pf

link (url) DOI [BibTex]



Absolute diffusion measurements of active enzyme solutions by NMR

Guenther, J., Majer, G., Fischer, P.

J. Chem. Phys., 150(124201), March 2019 (article)

Abstract
The diffusion of enzymes is of fundamental importance for many biochemical processes. Enhanced or directed enzyme diffusion can alter the accessibility of substrates and the organization of enzymes within cells. Several studies based on fluorescence correlation spectroscopy (FCS) report enhanced diffusion of enzymes upon interaction with their substrate or inhibitor. In this context, major importance is given to the enzyme fructose-bisphosphate aldolase, for which enhanced diffusion has been reported even though the catalysed reaction is endothermic. Additionally, enhanced diffusion of tracer particles surrounding the active aldolase enzymes has been reported. These studies suggest that active enzymes can act as chemical motors that self-propel and give rise to enhanced diffusion. However, fluorescence studies of enzymes can, despite several advantages, suffer from artefacts. Here we show that the absolute diffusion coefficients of active enzyme solutions can be determined with Pulsed Field Gradient Nuclear Magnetic Resonance (PFG-NMR). The advantage of PFG-NMR is that the motion of the molecule of interest is directly observed in its native state without the need for any labelling. Further, PFG-NMR is model-free and thus yields absolute diffusion constants. Our PFG-NMR experiments of solutions containing active fructose-bisphosphate aldolase from rabbit muscle do not show any diffusion enhancement for the active enzymes nor the surrounding molecules. Additionally, we do not observe any diffusion enhancement of aldolase in the presence of its inhibitor pyrophosphate.

pf

link (url) DOI [BibTex]



Chemical Nanomotors at the Gram Scale Form a Dense Active Optorheological Medium

Choudhury, U., Singh, D. P., Qiu, T., Fischer, P.

Adv. Mat., 31(1807382), February 2019 (article)

Abstract
The rheological properties of a colloidal suspension are a function of the concentration of the colloids and their interactions. While suspensions of passive colloids are well studied and have been shown to form crystals, gels, and glasses, examples of energy-consuming "active" colloidal suspensions are still largely unexplored. Active suspensions of biological matter, such as motile bacteria or dense active actin–motor–protein mixtures, have revealed superfluid-like and gel-like states, respectively. Attractive inanimate systems for active matter are chemically self-propelled particles. It has so far been challenging to use these swimming particles at high enough densities to affect the bulk material properties of the suspension. Here, it is shown that light-triggered asymmetric titanium dioxide particles that self-propel can be obtained in large quantities and self-organize to make a gram-scale active medium. The suspension shows an activity-dependent tenfold reversible change in its bulk viscosity.

pf

link (url) DOI [BibTex]


First Observation of Optical Activity in Hyper-Rayleigh Scattering

Collins, J., Rusimova, K., Hooper, D., Jeong, H. H., Ohnoutek, L., Pradaux-Caggiano, F., Verbiest, T., Carbery, D., Fischer, P., Valev, V.

Phys. Rev. X, 9(011024), January 2019 (article)

Abstract
Chiral nano- or metamaterials and surfaces enable striking photonic properties, such as negative refractive index and superchiral light, driving promising applications in novel optical components, nanorobotics, and enhanced chiral molecular interactions with light. In characterizing chirality, although nonlinear chiroptical techniques are typically much more sensitive than their linear optical counterparts, separating true chirality from anisotropy is a major challenge. Here, we report the first observation of optical activity in second-harmonic hyper-Rayleigh scattering (HRS). We demonstrate the effect in a 3D isotropic suspension of Ag nanohelices in water. The effect is 5 orders of magnitude stronger than linear optical activity and is well pronounced above the multiphoton luminescence background. Because of its sensitivity, isotropic environment, and straightforward experimental geometry, HRS optical activity constitutes a fundamental experimental breakthrough in chiral photonics for media including nanomaterials, metamaterials, and chemical molecules.

pf

link (url) DOI [BibTex]



Dynamics of self-propelled colloids and their application as active matter

Choudhury, U.

University of Groningen, Zernike Institute for Advanced Materials, 2019 (phdthesis)

Abstract
In this thesis, the behavior of active particles, spanning from single-particle dynamics to the collective behavior of many particles, is explored. Active colloids are out-of-equilibrium systems that have been studied extensively over the past 15 years. This thesis addresses several phenomena that arise in the field of active colloids.

pf

link (url) [BibTex]



Geometric Image Synthesis

Abu Alhaija, H., Mustikovela, S. K., Geiger, A., Rother, C.

Computer Vision – ACCV 2018, 11366, pages: 85-100, Lecture Notes in Computer Science, (Editors: Jawahar, C. and Li, H. and Mori, G. and Schindler, K. ), Asian Conference on Computer Vision, 2019 (conference)

avg

DOI Project Page [BibTex]



Occupancy Networks: Learning 3D Reconstruction in Function Space

Mescheder, L., Oechsle, M., Niemeyer, M., Nowozin, S., Geiger, A.

In Proceedings IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), IEEE International Conference on Computer Vision and Pattern Recognition (CVPR) 2019, 2019 (inproceedings)

Abstract
With the advent of deep neural networks, learning-based approaches for 3D reconstruction have gained popularity. However, unlike for images, in 3D there is no canonical representation which is both computationally and memory efficient yet allows for representing high-resolution geometry of arbitrary topology. Many of the state-of-the-art learning-based 3D reconstruction approaches can hence only represent very coarse 3D geometry or are limited to a restricted domain. In this paper, we propose Occupancy Networks, a new representation for learning-based 3D reconstruction methods. Occupancy networks implicitly represent the 3D surface as the continuous decision boundary of a deep neural network classifier. In contrast to existing approaches, our representation encodes a description of the 3D output at infinite resolution without excessive memory footprint. We validate that our representation can efficiently encode 3D structure and can be inferred from various kinds of input. Our experiments demonstrate competitive results, both qualitatively and quantitatively, for the challenging tasks of 3D reconstruction from single images, noisy point clouds and coarse discrete voxel grids. We believe that occupancy networks will become a useful tool in a wide variety of learning-based 3D tasks.
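To illustrate the idea of representing a surface as the continuous decision boundary of a classifier, here is a minimal, untrained occupancy function. The architecture (hidden width 16, latent code length 8) and random weights are arbitrary assumptions for the sketch, not the network from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny MLP mapping a 3D point p, conditioned on a latent code z, to an
# occupancy probability. In the paper the network is trained so that its
# 0.5 level set traces the object surface; here the weights are random.
W1 = rng.standard_normal((16, 3 + 8))
b1 = np.zeros(16)
W2 = rng.standard_normal((1, 16))
b2 = np.zeros(1)

def occupancy(p, z):
    h = np.tanh(W1 @ np.concatenate([p, z]) + b1)
    s = (W2 @ h + b2)[0]
    return 1.0 / (1.0 + np.exp(-s))  # occupancy probability in (0, 1)

# Because the representation is a function, it can be queried at any
# resolution: just evaluate it on an arbitrarily fine grid of points.
z = rng.standard_normal(8)
probs = [occupancy(np.array([x, 0.0, 0.0]), z) for x in np.linspace(-1.0, 1.0, 9)]
```

The memory footprint is the (fixed) network weights, independent of the resolution at which the surface is later extracted.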

avg

Code Video pdf suppmat Project Page blog [BibTex]


2017


The Numerics of GANs

Mescheder, L., Nowozin, S., Geiger, A.

In Proceedings of the conference "Neural Information Processing Systems 2017", (Editors: Guyon I. and Luxburg U. v. and Bengio S. and Wallach H. and Fergus R. and Vishwanathan S. and Garnett R.), Curran Associates, Inc., Advances in Neural Information Processing Systems 30 (NIPS), December 2017 (inproceedings)

Abstract
In this paper, we analyze the numerics of common algorithms for training Generative Adversarial Networks (GANs). Using the formalism of smooth two-player games we analyze the associated gradient vector field of GAN training objectives. Our findings suggest that the convergence of current algorithms suffers due to two factors: (i) the presence of eigenvalues of the Jacobian of the gradient vector field with zero real part, and (ii) eigenvalues with a large imaginary part. Using these findings, we design a new algorithm that overcomes some of these limitations and has better convergence properties. Experimentally, we demonstrate its superiority on training common GAN architectures and show convergence on GAN architectures that are known to be notoriously hard to train.
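The eigenvalue argument can be reproduced on a textbook bilinear game (our choice of illustrative example, not necessarily one from the paper):

```python
import numpy as np

# Bilinear toy game min_x max_y x*y. Simultaneous gradient descent/ascent
# follows the vector field v(x, y) = (-y, x), whose Jacobian
# J = [[0, -1], [1, 0]] has eigenvalues +/- i: zero real part, the first
# failure mode identified in the abstract.
J = np.array([[0.0, -1.0],
              [1.0, 0.0]])
eigvals = np.linalg.eigvals(J)

# With a finite step size the iterates spiral outward instead of
# converging to the equilibrium at (0, 0).
v = np.array([1.0, 0.0])
for _ in range(100):
    x, y = v
    v = v + 0.1 * np.array([-y, x])
```

Each Euler step multiplies the distance from the equilibrium by sqrt(1 + 0.01), so plain simultaneous gradient descent diverges here for any positive step size.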

avg

pdf Project Page [BibTex]



Active colloidal propulsion over a crystalline surface

Choudhury, U., Straube, A., Fischer, P., Gibbs, J., Höfling, F.

New Journal of Physics, 19, pages: 125010, December 2017 (article)

Abstract
We study both experimentally and theoretically the dynamics of chemically self-propelled Janus colloids moving atop a two-dimensional crystalline surface. The surface is a hexagonally close-packed monolayer of colloidal particles of the same size as the mobile one. The dynamics of the self-propelled colloid reflects the competition between hindered diffusion due to the periodic surface and enhanced diffusion due to active motion. Which contribution dominates depends on the propulsion strength, which can be systematically tuned by changing the concentration of a chemical fuel. The mean-square displacements obtained from the experiment exhibit enhanced diffusion at long lag times. Our experimental data are consistent with a Langevin model for the effectively two-dimensional translational motion of an active Brownian particle in a periodic potential, combining the confining effects of gravity and the crystalline surface with the free rotational diffusion of the colloid. Approximate analytical predictions are made for the mean-square displacement describing the crossover from free Brownian motion at short times to active diffusion at long times. The results are in semi-quantitative agreement with numerical results of a refined Langevin model that treats translational and rotational degrees of freedom on the same footing.
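The Langevin picture described above can be sketched for the simplest limiting case, a free active Brownian particle without the periodic surface potential; all parameter values below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Free 2D active Brownian particle: translational diffusion D, rotational
# diffusion Dr, propulsion speed v0, Euler-Maruyama time step dt.
# Values are illustrative, not fitted to the experiments in the paper.
D, Dr, v0, dt, n = 0.1, 1.0, 2.0, 1e-3, 50_000
phi = 0.0
pos = np.zeros(2)
traj = np.empty((n, 2))
for t in range(n):
    phi += np.sqrt(2 * Dr * dt) * rng.standard_normal()     # orientation diffuses freely
    pos = (pos + v0 * dt * np.array([np.cos(phi), np.sin(phi)])  # self-propulsion
           + np.sqrt(2 * D * dt) * rng.standard_normal(2))       # thermal noise
    traj[t] = pos
```

For this free model the long-time mean-square displacement is diffusive with an enhanced coefficient D + v0^2/(2*Dr); the periodic potential studied in the paper additionally hinders the short-time motion.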

pf

link (url) DOI [BibTex]



Wireless Acoustic-Surface Actuators for Miniaturized Endoscopes

Qiu, T., Adams, F., Palagi, S., Melde, K., Mark, A. G., Wetterauer, U., Miernik, A., Fischer, P.

ACS Applied Materials & Interfaces, 9(49):42536 - 42543, November 2017 (article)

Abstract
Endoscopy enables minimally invasive procedures in many medical fields, such as urology. However, current endoscopes are normally cable-driven, which limits their dexterity and makes them hard to miniaturize. Indeed, current urological endoscopes have an outer diameter of about 3 mm and still only possess one bending degree of freedom. In this paper, we report a novel wireless actuation mechanism that increases the dexterity and permits the miniaturization of a urological endoscope. The novel actuator consists of thin active surfaces that can be readily attached to any device and are wirelessly powered by ultrasound. The surfaces consist of two-dimensional arrays of micro-bubbles, which oscillate under ultrasound excitation and thereby generate an acoustic streaming force. Bubbles of different sizes are addressed by their unique resonance frequency, thus multiple degrees of freedom can readily be incorporated. Two active miniaturized devices (with a side length of around 1 mm) are demonstrated: a miniaturized mechanical arm that realizes two degrees of freedom, and a flexible endoscope prototype equipped with a camera at the tip. With the flexible endoscope, an active endoscopic examination is successfully performed in a rabbit bladder. These results show the potential medical applicability of surface actuators wirelessly powered by ultrasound penetrating through biological tissues.

pf

link (url) DOI Project Page [BibTex]



Bounding Boxes, Segmentations and Object Coordinates: How Important is Recognition for 3D Scene Flow Estimation in Autonomous Driving Scenarios?

Behl, A., Jafari, O. H., Mustikovela, S. K., Alhaija, H. A., Rother, C., Geiger, A.

In Proceedings IEEE International Conference on Computer Vision (ICCV), IEEE, Piscataway, NJ, USA, IEEE International Conference on Computer Vision (ICCV), October 2017 (inproceedings)

Abstract
Existing methods for 3D scene flow estimation often fail in the presence of large displacement or local ambiguities, e.g., at texture-less or reflective surfaces. However, these challenges are omnipresent in dynamic road scenes, which is the focus of this work. Our main contribution is to overcome these 3D motion estimation problems by exploiting recognition. In particular, we investigate the importance of recognition granularity, from coarse 2D bounding box estimates over 2D instance segmentations to fine-grained 3D object part predictions. We compute these cues using CNNs trained on a newly annotated dataset of stereo images and integrate them into a CRF-based model for robust 3D scene flow estimation - an approach we term Instance Scene Flow. We analyze the importance of each recognition cue in an ablation study and observe that the instance segmentation cue is by far strongest, in our setting. We demonstrate the effectiveness of our method on the challenging KITTI 2015 scene flow benchmark where we achieve state-of-the-art performance at the time of submission.

avg

pdf suppmat Poster Project Page [BibTex]



Sparsity Invariant CNNs

Uhrig, J., Schneider, N., Schneider, L., Franke, U., Brox, T., Geiger, A.

International Conference on 3D Vision (3DV) 2017, International Conference on 3D Vision (3DV), October 2017 (conference)

Abstract
In this paper, we consider convolutional neural networks operating on sparse inputs with an application to depth upsampling from sparse laser scan data. First, we show that traditional convolutional networks perform poorly when applied to sparse data even when the location of missing data is provided to the network. To overcome this problem, we propose a simple yet effective sparse convolution layer which explicitly considers the location of missing data during the convolution operation. We demonstrate the benefits of the proposed network architecture in synthetic and real experiments with respect to various baseline approaches. Compared to dense baselines, the proposed sparse convolution network generalizes well to novel datasets and is invariant to the level of sparsity in the data. For our evaluation, we derive a novel dataset from the KITTI benchmark, comprising 93k depth annotated RGB images. Our dataset allows for training and evaluating depth upsampling and depth prediction techniques in challenging real-world settings.
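A rough sketch of such a mask-normalized convolution, in our simplified single-channel reading of the abstract rather than the authors' implementation:

```python
import numpy as np

def sparse_conv(x, mask, weight, eps=1e-8):
    """Normalized 'valid' convolution over a sparse single-channel input.

    Only observed pixels (mask == 1) contribute; the weighted sum in each
    window is renormalized by the count of observed pixels, and the mask is
    propagated by max-pooling so downstream layers know which outputs are
    backed by data.
    """
    k = weight.shape[0]
    h, w = x.shape
    out = np.zeros((h - k + 1, w - k + 1))
    out_mask = np.zeros_like(out)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            m = mask[i:i + k, j:j + k]
            num = (x[i:i + k, j:j + k] * m * weight).sum()
            out[i, j] = num / (m.sum() + eps)
            out_mask[i, j] = m.max()  # output valid if any input pixel was
    return out, out_mask

# A constant image with roughly half its pixels missing is still recovered
# exactly; a plain convolution would be biased toward zero by the holes.
x = np.full((5, 5), 2.0)
mask = (np.arange(25).reshape(5, 5) % 2).astype(float)
out, out_mask = sparse_conv(x * mask, mask, np.ones((3, 3)))
```

The renormalization is what makes the output independent of the local sparsity level, matching the invariance claim in the abstract.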

avg

pdf suppmat Project Page Project Page [BibTex]



OctNetFusion: Learning Depth Fusion from Data

Riegler, G., Ulusoy, A. O., Bischof, H., Geiger, A.

International Conference on 3D Vision (3DV) 2017, International Conference on 3D Vision (3DV), October 2017 (conference)

Abstract
In this paper, we present a learning based approach to depth fusion, i.e., dense 3D reconstruction from multiple depth images. The most common approach to depth fusion is based on averaging truncated signed distance functions, which was originally proposed by Curless and Levoy in 1996. While this method is simple and provides great results, it is not able to reconstruct (partially) occluded surfaces and requires a large number of frames to filter out sensor noise and outliers. Motivated by the availability of large 3D model repositories and recent advances in deep learning, we present a novel 3D CNN architecture that learns to predict an implicit surface representation from the input depth maps. Our learning based method significantly outperforms the traditional volumetric fusion approach in terms of noise reduction and outlier suppression. By learning the structure of real world 3D objects and scenes, our approach is further able to reconstruct occluded regions and to fill in gaps in the reconstruction. We demonstrate that our learning based approach outperforms both vanilla TSDF fusion as well as TV-L1 fusion on the task of volumetric fusion. Further, we demonstrate state-of-the-art 3D shape completion results.

avg

pdf Video 1 Video 2 Project Page Project Page [BibTex]



Active Acoustic Surfaces Enable the Propulsion of a Wireless Robot

Qiu, T., Palagi, S., Mark, A. G., Melde, K., Adams, F., Fischer, P.

Advanced Materials Interfaces, 4(21):1700933, September 2017 (article)

Abstract
A major challenge that prevents the miniaturization of mechanically actuated systems is the lack of suitable methods that permit the efficient transfer of power to small scales. Acoustic energy holds great potential, as it is wireless, penetrates deep into biological tissues, and the mechanical vibrations can be directly converted into directional forces. Recently, active acoustic surfaces have been developed that consist of 2D arrays of microcavities holding microbubbles that can be excited with an external acoustic field. At resonance, the surfaces give rise to acoustic streaming and thus provide a highly directional propulsive force. Here, this study advances these wireless surface actuators by studying their force output as the size of the bubble array is increased. In particular, a general method is reported to dramatically improve the propulsive force, demonstrating that the surface actuators are able to propel centimeter-scale devices. To prove the flexibility of the functional surfaces as wireless, ready-to-attach actuators, a mobile mini-robot capable of propulsion in water along multiple directions is presented. This work paves the way toward effectively exploiting acoustic surfaces as a novel wireless actuation scheme at small scales.

pf

link (url) DOI Project Page [BibTex]


Direct Visual Odometry for a Fisheye-Stereo Camera

Liu, P., Heng, L., Sattler, T., Geiger, A., Pollefeys, M.

In Proceedings IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE, Piscataway, NJ, USA, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), September 2017 (inproceedings)

Abstract
We present a direct visual odometry algorithm for a fisheye-stereo camera. Our algorithm performs simultaneous camera motion estimation and semi-dense reconstruction. The pipeline consists of two threads: a tracking thread and a mapping thread. In the tracking thread, we estimate the camera pose via semi-dense direct image alignment. To have a wider field of view (FoV) which is important for robotic perception, we use fisheye images directly without converting them to conventional pinhole images which come with a limited FoV. To address the epipolar curve problem, plane-sweeping stereo is used for stereo matching and depth initialization. Multiple depth hypotheses are tracked for selected pixels to better capture the uncertainty characteristics of stereo matching. Temporal motion stereo is then used to refine the depth and remove false positive depth hypotheses. Our implementation runs at an average of 20 Hz on a low-end PC. We run experiments in outdoor environments to validate our algorithm, and discuss the experimental results. We experimentally show that we are able to estimate 6D poses with low drift, and at the same time, do semi-dense 3D reconstruction with high accuracy.

avg

pdf Project Page [BibTex]



Corrosion-Protected Hybrid Nanoparticles

Jeong, H. H., Alarcon-Correa, M., Mark, A. G., Son, K., Lee, T., Fischer, P.

Advanced Science, 4(12):1700234, September 2017 (article)

Abstract
Nanoparticles composed of functional materials hold great promise for applications due to their unique electronic, optical, magnetic, and catalytic properties. However, a number of functional materials are not only difficult to fabricate at the nanoscale, but are also chemically unstable in solution. Hence, protecting nanoparticles from corrosion is a major challenge for those applications that require stability in aqueous solutions and biological fluids. Here, this study presents a generic scheme to grow hybrid 3D nanoparticles that are completely encapsulated by a nanometer-thick protective shell. The method consists of vacuum-based growth and protection, and combines oblique physical vapor deposition with atomic layer deposition. It provides wide flexibility in the shape and composition of the nanoparticles, and the environments against which particles are protected. The work demonstrates the approach with multifunctional nanoparticles possessing ferromagnetic, plasmonic, and chiral properties. The present scheme allows nanocolloids, which immediately corrode without protection, to remain functional, at least for a week, in acidic solutions.

pf

link (url) DOI [BibTex]



Augmented Reality Meets Deep Learning for Car Instance Segmentation in Urban Scenes

Alhaija, H. A., Mustikovela, S. K., Mescheder, L., Geiger, A., Rother, C.

In Proceedings of the British Machine Vision Conference 2017, Proceedings of the British Machine Vision Conference, September 2017 (inproceedings)

Abstract
The success of deep learning in computer vision is based on the availability of large annotated datasets. To lower the need for hand labeled images, virtually rendered 3D worlds have recently gained popularity. Unfortunately, creating realistic 3D content is challenging on its own and requires significant human effort. In this work, we propose an alternative paradigm which combines real and synthetic data for learning semantic instance segmentation models. Exploiting the fact that not all aspects of the scene are equally important for this task, we propose to augment real-world imagery with virtual objects of the target category. Capturing real-world images at large scale is easy and cheap, and directly provides real background appearances without the need for creating complex 3D models of the environment. We present an efficient procedure to augment these images with virtual objects. This allows us to create realistic composite images which exhibit both realistic background appearance as well as a large number of complex object arrangements. In contrast to modeling complete 3D environments, our data augmentation approach requires only a few user interactions in combination with 3D shapes of the target object category. We demonstrate the utility of the proposed approach for training a state-of-the-art high-capacity deep model for semantic instance segmentation. In particular, we consider the task of segmenting car instances on the KITTI dataset which we have annotated with pixel-accurate ground truth. Our experiments demonstrate that models trained on augmented imagery generalize better than those trained on synthetic data or models trained on limited amounts of annotated real data.

avg

pdf Project Page [BibTex]

pdf Project Page [BibTex]
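The central compositing step of this augmentation paradigm can be sketched in a few lines. This is a hypothetical minimal example, not the paper's actual pipeline: a rendered virtual object with an alpha mask is alpha-blended over a real background image, and the mask itself directly yields pixel-accurate instance labels for free.

```python
import numpy as np

def composite(background, render, alpha):
    """Alpha-blend a rendered object over a real background image."""
    alpha = alpha[..., None]                       # broadcast over color channels
    return alpha * render + (1 - alpha) * background

bg = np.full((4, 4, 3), 0.2)                       # real background image (toy values)
obj = np.full((4, 4, 3), 0.9)                      # rendered virtual car (toy values)
mask = np.zeros((4, 4)); mask[1:3, 1:3] = 1.0      # object alpha mask from the renderer
img = composite(bg, obj, mask)                     # realistic composite image
labels = mask.astype(int)                          # free pixel-accurate ground truth
```

The same mask that drives the blending is the instance annotation, which is why augmenting real imagery sidesteps hand labeling.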


Adversarial Variational Bayes: Unifying Variational Autoencoders and Generative Adversarial Networks
Adversarial Variational Bayes: Unifying Variational Autoencoders and Generative Adversarial Networks

Mescheder, L., Nowozin, S., Geiger, A.

In Proceedings of the 34th International Conference on Machine Learning, 70, Proceedings of Machine Learning Research, (Editors: Doina Precup, Yee Whye Teh), PMLR, International Conference on Machine Learning (ICML), August 2017 (inproceedings)

Abstract
Variational Autoencoders (VAEs) are expressive latent variable models that can be used to learn complex probability distributions from training data. However, the quality of the resulting model crucially relies on the expressiveness of the inference model. We introduce Adversarial Variational Bayes (AVB), a technique for training Variational Autoencoders with arbitrarily expressive inference models. We achieve this by introducing an auxiliary discriminative network that allows us to rephrase the maximum-likelihood problem as a two-player game, hence establishing a principled connection between VAEs and Generative Adversarial Networks (GANs). We show that in the nonparametric limit our method yields an exact maximum-likelihood assignment for the parameters of the generative model, as well as the exact posterior distribution over the latent variables given an observation. In contrast to competing approaches which combine VAEs with GANs, our approach has a clear theoretical justification, retains most advantages of standard Variational Autoencoders and is easy to implement.

avg

pdf suppmat Project Page arxiv-version Project Page [BibTex]

pdf suppmat Project Page arxiv-version Project Page [BibTex]
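The key identity behind AVB is that the optimal discriminator equals the log density ratio log q(z|x) - log p(z), so its expectation under q recovers the KL term of the ELBO. The following toy numeric check illustrates this with closed-form Gaussians (a sketch with assumed toy parameters, not the paper's adversarial training procedure):

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 1.0, 0.5           # toy Gaussian inference model q(z|x) = N(mu, sigma^2)

def log_q(z):                  # log density of q(z|x)
    return -0.5 * np.log(2 * np.pi * sigma**2) - (z - mu)**2 / (2 * sigma**2)

def log_p(z):                  # log density of the prior p(z) = N(0, 1)
    return -0.5 * np.log(2 * np.pi) - z**2 / 2

def t_star(z):                 # optimal discriminator: log q(z|x) - log p(z)
    return log_q(z) - log_p(z)

z = rng.normal(mu, sigma, size=200_000)   # samples from q, as in the two-player game
kl_mc = t_star(z).mean()                  # Monte Carlo estimate E_q[T*(z)]
kl_analytic = np.log(1 / sigma) + (sigma**2 + mu**2) / 2 - 0.5

print(kl_mc, kl_analytic)                 # the two KL estimates agree closely
```

In AVB the discriminator is a trained network rather than this closed form, which is what lifts the restriction to tractable inference models.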


Learning local feature aggregation functions with backpropagation
Learning local feature aggregation functions with backpropagation

Paschalidou, D., Katharopoulos, A., Diou, C., Delopoulos, A.

In 25th European Signal Processing Conference (EUSIPCO), IEEE, August 2017 (inproceedings)

Abstract
This paper introduces a family of local feature aggregation functions and a novel method to estimate their parameters, such that they generate optimal representations for classification (or any task that can be expressed as a cost function minimization problem). To achieve that, we compose the local feature aggregation function with the classifier cost function and we backpropagate the gradient of this cost function in order to update the local feature aggregation function parameters. Experiments on synthetic datasets indicate that our method discovers parameters that model the class-relevant information in addition to the local feature space. Further experiments on a variety of motion and visual descriptors, both on image and video datasets, show that our method outperforms other state-of-the-art local feature aggregation functions, such as Bag of Words, Fisher Vectors and VLAD, by a large margin.

avg

pdf code poster link (url) DOI [BibTex]

pdf code poster link (url) DOI [BibTex]
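A differentiable aggregation function of the kind this family generalizes can be sketched as a soft-assignment Bag of Words: each local descriptor is softly assigned to learnable codewords, so the pooled representation is differentiable with respect to the codewords and can be trained by backpropagating the classifier's cost. This is an illustrative sketch with assumed shapes and a made-up sharpness parameter `beta`, not the paper's exact parameterization:

```python
import numpy as np

rng = np.random.default_rng(1)

def soft_bow(descriptors, codewords, beta=5.0):
    """Soft-assignment histogram: softmax over negative squared distances."""
    d2 = ((descriptors[:, None, :] - codewords[None, :, :])**2).sum(-1)
    logits = -beta * d2
    a = np.exp(logits - logits.max(axis=1, keepdims=True))
    a /= a.sum(axis=1, keepdims=True)      # per-descriptor soft assignment
    return a.mean(axis=0)                  # pooled, fixed-length representation

X = rng.normal(size=(50, 8))               # 50 local 8-D descriptors from one image
C = rng.normal(size=(4, 8))                # 4 learnable codewords
h = soft_bow(X, C)
print(h, h.sum())                          # a length-4 histogram summing to 1
```

Because every step is smooth, the gradient of a downstream classification loss flows back into `C`, which is the mechanism that lets the aggregation adapt to class-relevant information.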


Slow Flow: Exploiting High-Speed Cameras for Accurate and Diverse Optical Flow Reference Data
Slow Flow: Exploiting High-Speed Cameras for Accurate and Diverse Optical Flow Reference Data

Janai, J., Güney, F., Wulff, J., Black, M., Geiger, A.

In Proceedings IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2017, pages: 1406-1416, IEEE, Piscataway, NJ, USA, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017 (inproceedings)

Abstract
Existing optical flow datasets are limited in size and variability due to the difficulty of capturing dense ground truth. In this paper, we tackle this problem by tracking pixels through densely sampled space-time volumes recorded with a high-speed video camera. Our model exploits the linearity of small motions and reasons about occlusions from multiple frames. Using our technique, we are able to establish accurate reference flow fields outside the laboratory in natural environments. In addition, we show how our predictions can be used to augment the input images with realistic motion blur. We demonstrate the quality of the produced flow fields on synthetic and real-world datasets. Finally, we collect a novel challenging optical flow dataset by applying our technique on data from a high-speed camera and analyze the performance of the state-of-the-art in optical flow under various levels of motion blur.

avg ps

pdf suppmat Project page Video DOI Project Page [BibTex]

pdf suppmat Project page Video DOI Project Page [BibTex]
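The idea of exploiting small inter-frame motions from a high-speed camera can be sketched as flow composition: the flow over a large frame gap is obtained by chaining small flows, f_02(x) = f_01(x) + f_12(x + f_01(x)). The sketch below uses assumed notation and a crude nearest-neighbour lookup; the paper's tracker is considerably more sophisticated:

```python
import numpy as np

def compose(f01, f12):
    """Chain two dense flow fields with nearest-neighbour lookup on the grid."""
    h, w = f01.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    x2 = np.clip(np.rint(xs + f01[..., 0]).astype(int), 0, w - 1)
    y2 = np.clip(np.rint(ys + f01[..., 1]).astype(int), 0, h - 1)
    return f01 + f12[y2, x2]               # f_02(x) = f_01(x) + f_12(x + f_01(x))

# two constant small flows of (1, 0) px each compose to a (2, 0) px flow
f01 = np.zeros((4, 4, 2)); f01[..., 0] = 1.0
f12 = np.zeros((4, 4, 2)); f12[..., 0] = 1.0
f02 = compose(f01, f12)
```

Because each per-frame displacement is small, each individual flow estimate is easy and accurate, while the composition accumulates them into the large reference displacement.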


OctNet: Learning Deep 3D Representations at High Resolutions
OctNet: Learning Deep 3D Representations at High Resolutions

Riegler, G., Ulusoy, O., Geiger, A.

In Proceedings IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2017, IEEE, Piscataway, NJ, USA, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017 (inproceedings)

Abstract
We present OctNet, a representation for deep learning with sparse 3D data. In contrast to existing models, our representation enables 3D convolutional networks which are both deep and high resolution. Towards this goal, we exploit the sparsity in the input data to hierarchically partition the space using a set of unbalanced octrees where each leaf node stores a pooled feature representation. This allows us to focus memory allocation and computation on the relevant dense regions and enables deeper networks without compromising resolution. We demonstrate the utility of our OctNet representation by analyzing the impact of resolution on several 3D tasks including 3D object classification, orientation estimation and point cloud labeling.

avg ps

pdf suppmat Project Page Video Project Page [BibTex]

pdf suppmat Project Page Video Project Page [BibTex]
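The unbalanced-tree idea can be illustrated with a toy 2-D quadtree analogue (purely illustrative; OctNet itself uses a hybrid grid-octree structure in 3D): a cell is subdivided only where data are present, and each leaf stores one pooled feature, so memory follows the sparsity of the input rather than the full resolution.

```python
import numpy as np

def build(points, feats, x0, y0, size, depth, max_depth):
    """Recursively partition a square cell; store a pooled feature per leaf."""
    mask = (points[:, 0] >= x0) & (points[:, 0] < x0 + size) & \
           (points[:, 1] >= y0) & (points[:, 1] < y0 + size)
    if not mask.any():
        return None                        # empty cell: no memory spent
    if depth == max_depth:
        return feats[mask].mean(axis=0)    # leaf: pooled feature representation
    half = size / 2
    return [build(points, feats, x0 + dx, y0 + dy, half, depth + 1, max_depth)
            for dx in (0, half) for dy in (0, half)]

def count_leaves(node):
    if node is None:
        return 0
    if isinstance(node, list):
        return sum(count_leaves(c) for c in node)
    return 1

rng = np.random.default_rng(0)
pts = rng.uniform(0, 0.25, size=(100, 2))  # points clustered in one corner only
f = rng.normal(size=(100, 3))
tree = build(pts, f, 0.0, 0.0, 1.0, 0, 3)
print(count_leaves(tree), 4**3)            # far fewer leaves than the dense grid
```

A dense grid at this depth would allocate 64 cells; the tree allocates leaves only where the points are, which is what makes deep, high-resolution 3D networks feasible.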


A Multi-View Stereo Benchmark with High-Resolution Images and Multi-Camera Videos
A Multi-View Stereo Benchmark with High-Resolution Images and Multi-Camera Videos

Schöps, T., Schönberger, J. L., Galliani, S., Sattler, T., Schindler, K., Pollefeys, M., Geiger, A.

In Proceedings IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2017, IEEE, Piscataway, NJ, USA, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017 (inproceedings)

Abstract
Motivated by the limitations of existing multi-view stereo benchmarks, we present a novel dataset for this task. Towards this goal, we recorded a variety of indoor and outdoor scenes using a high-precision laser scanner and captured both high-resolution DSLR imagery as well as synchronized low-resolution stereo videos with varying fields-of-view. To align the images with the laser scans, we propose a robust technique which minimizes photometric errors conditioned on the geometry. In contrast to previous datasets, our benchmark provides novel challenges and covers a diverse set of viewpoints and scene types, ranging from natural scenes to man-made indoor and outdoor environments. Furthermore, we provide data at significantly higher temporal and spatial resolution. Our benchmark is the first to cover the important use case of hand-held mobile devices while also providing high-resolution DSLR camera images. We make our datasets and an online evaluation server available at http://www.eth3d.net.

avg

pdf suppmat Project Page Project Page [BibTex]

pdf suppmat Project Page Project Page [BibTex]


Toroidal Constraints for Two Point Localization Under High Outlier Ratios
Toroidal Constraints for Two Point Localization Under High Outlier Ratios

Camposeco, F., Sattler, T., Cohen, A., Geiger, A., Pollefeys, M.

In Proceedings IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2017, IEEE, Piscataway, NJ, USA, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017 (inproceedings)

Abstract
Localizing a query image against a 3D model at large scale is a hard problem, since 2D-3D matches become more and more ambiguous as the model size increases. This creates a need for pose estimation strategies that can handle very low inlier ratios. In this paper, we draw new insights from the geometric information available in the 2D-3D matching process. As modern descriptors are not invariant against large variations in viewpoint, we are able to find the rays in space used to triangulate a given point that are closest to a query descriptor. It is well known that two correspondences constrain the camera to lie on the surface of a torus. Adding the knowledge of the direction of triangulation, we are able to approximate the position of the camera from two matches alone. We derive a geometric solver that can compute this position in under 1 microsecond. Using this solver, we propose a simple yet powerful outlier filter which scales quadratically in the number of matches. We validate the accuracy of our solver and demonstrate the usefulness of our method in real world settings.

avg

pdf suppmat Project Page Project Page [BibTex]

pdf suppmat Project Page Project Page [BibTex]
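The torus constraint follows from the inscribed-angle theorem: two 2D-3D matches fix the angle theta under which the camera sees the segment between the two 3D points, so in any plane through those points the camera lies on a circle of radius R = |p1 - p2| / (2 sin theta), and rotating that circle about the p1-p2 axis sweeps out the torus. The following numeric check illustrates the planar case with assumed toy values (theta < 90 degrees); it is not the paper's microsecond solver:

```python
import numpy as np

p1, p2 = np.array([0.0, 0.0]), np.array([2.0, 0.0])   # two triangulated 3D points
theta = np.deg2rad(40.0)                              # angle fixed by the two matches
d = np.linalg.norm(p2 - p1)
R = d / (2 * np.sin(theta))                           # inscribed-angle circle radius

# circle center above the chord (valid since theta < 90 degrees)
center = np.array([d / 2, np.sqrt(R**2 - (d / 2)**2)])

# every point on the major arc sees the segment p1-p2 under the same angle
angles = []
for phi in np.linspace(np.pi / 6, 5 * np.pi / 6, 7):
    c = center + R * np.array([np.cos(phi), np.sin(phi)])
    u, v = p1 - c, p2 - c
    angles.append(np.arccos(u @ v / (np.linalg.norm(u) * np.linalg.norm(v))))

print(max(abs(a - theta) for a in angles))            # deviation is essentially zero
```

Knowing the direction of triangulation then singles out a small region of this locus, which is how the paper approximates the camera position from two matches alone.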