2018


Deep Marching Cubes: Learning Explicit Surface Representations

Liao, Y., Donne, S., Geiger, A.

In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), IEEE Computer Society, 2018 (inproceedings)

Abstract
Existing learning based solutions to 3D surface prediction cannot be trained end-to-end as they operate on intermediate representations (e.g., TSDF) from which 3D surface meshes must be extracted in a post-processing step (e.g., via the marching cubes algorithm). In this paper, we investigate the problem of end-to-end 3D surface prediction. We first demonstrate that the marching cubes algorithm is not differentiable and propose an alternative differentiable formulation which we insert as a final layer into a 3D convolutional neural network. We further propose a set of loss functions which allow for training our model with sparse point supervision. Our experiments demonstrate that the model allows for predicting sub-voxel accurate 3D shapes of arbitrary topology. Additionally, it learns to complete shapes and to separate an object's inside from its outside even in the presence of sparse and incomplete ground truth. We investigate the benefits of our approach on the task of inferring shapes from 3D point clouds. Our model is flexible and can be combined with a variety of shape encoder and shape inference techniques.
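The differentiable ingredient of marching cubes is the vertex-placement step: a surface vertex on a voxel edge can be located by linearly interpolating the predicted occupancies at the edge endpoints, making the vertex position a smooth function of the network output. The sketch below is a hedged illustration of that step only (not the paper's actual layer, which also handles the topology decision):

```python
import numpy as np

def edge_vertex(o0, o1, x0, x1, iso=0.5):
    """Place a surface vertex on the edge (x0, x1) by linearly
    interpolating the predicted occupancies o0, o1 at the endpoints.
    The result varies smoothly with o0 and o1, so gradients can flow
    back into the occupancy predictions (sub-voxel accuracy)."""
    t = (iso - o0) / (o1 - o0)   # fraction along the edge where o crosses iso
    return x0 + t * (x1 - x0)

# Occupancies 0.2 and 0.8 are symmetric around iso=0.5: vertex at the midpoint.
v = edge_vertex(0.2, 0.8, np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]))
```

Nudging either occupancy moves the vertex continuously along the edge, which is exactly what a case-table lookup in classic marching cubes does not provide.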

pdf suppmat Video Project Page [BibTex]


Semantic Visual Localization

Schönberger, J., Pollefeys, M., Geiger, A., Sattler, T.

In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), IEEE Computer Society, 2018 (inproceedings)

Abstract
Robust visual localization under a wide range of viewing conditions is a fundamental problem in computer vision. Handling the difficult cases of this problem is not only very challenging but also of high practical relevance, e.g., in the context of life-long localization for augmented reality or autonomous robots. In this paper, we propose a novel approach based on a joint 3D geometric and semantic understanding of the world, enabling it to succeed under conditions where previous approaches failed. Our method leverages a novel generative model for descriptor learning, trained on semantic scene completion as an auxiliary task. The resulting 3D descriptors are robust to missing observations by encoding high-level 3D geometric and semantic information. Experiments on several challenging large-scale localization datasets demonstrate reliable localization under extreme viewpoint, illumination, and geometry changes.

pdf suppmat [BibTex]


Augmented Reality Meets Computer Vision: Efficient Data Generation for Urban Driving Scenes

Alhaija, H., Mustikovela, S., Mescheder, L., Geiger, A., Rother, C.

International Journal of Computer Vision (IJCV), 2018 (article)

Abstract
The success of deep learning in computer vision is based on the availability of large annotated datasets. To lower the need for hand labeled images, virtually rendered 3D worlds have recently gained popularity. Unfortunately, creating realistic 3D content is challenging on its own and requires significant human effort. In this work, we propose an alternative paradigm which combines real and synthetic data for learning semantic instance segmentation and object detection models. Exploiting the fact that not all aspects of the scene are equally important for this task, we propose to augment real-world imagery with virtual objects of the target category. Capturing real-world images at large scale is easy and cheap, and directly provides real background appearances without the need for creating complex 3D models of the environment. We present an efficient procedure to augment these images with virtual objects. In contrast to modeling complete 3D environments, our data augmentation approach requires only a few user interactions in combination with 3D models of the target object category. Leveraging our approach, we introduce a novel dataset of augmented urban driving scenes with 360 degree images that are used as environment maps to create realistic lighting and reflections on rendered objects. We analyze the significance of realistic object placement by comparing manual placement by humans to automatic methods based on semantic scene analysis. This allows us to create composite images which exhibit both realistic background appearance as well as a large number of complex object arrangements. Through an extensive set of experiments, we determine the set of parameters that produces augmented data which maximally enhances the performance of instance segmentation models. Further, we demonstrate the utility of the proposed approach on training standard deep models for semantic instance segmentation and object detection of cars in outdoor driving scenarios.
We test the models trained on our augmented data on the KITTI 2015 dataset, which we have annotated with pixel-accurate ground truth, and on the Cityscapes dataset. Our experiments demonstrate that the models trained on augmented imagery generalize better than those trained on fully synthetic data or models trained on limited amounts of annotated real data.
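At its core, the augmentation step composites rendered virtual objects over real photographs. A minimal alpha-compositing sketch is shown below, with placeholder arrays standing in for a real photo and a rendered object; the authors' pipeline additionally uses the 360 degree environment maps for realistic lighting and reflections:

```python
import numpy as np

def composite(background, render, alpha):
    """Alpha-composite a rendered object (render, with matte alpha)
    over a real background image. Arrays are float HxWx3 and HxWx1."""
    return alpha * render + (1.0 - alpha) * background

bg = np.zeros((4, 4, 3))                      # real photo (black placeholder)
fg = np.ones((4, 4, 3))                       # rendered object (white placeholder)
a = np.zeros((4, 4, 1)); a[1:3, 1:3] = 1.0    # object matte from the renderer
img = composite(bg, fg, a)                    # augmented training image
```

The corresponding instance mask for training comes for free from the matte, which is one reason such augmentation is far cheaper than pixel-accurate hand labeling.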

pdf [BibTex]


Immersive Low-Cost Virtual Reality Treatment for Phantom Limb Pain: Evidence from Two Cases

Ambron, E., Miller, A., Kuchenbecker, K. J., Buxbaum, L. J., Coslett, H. B.

Frontiers in Neurology, 9(67):1-7, 2018 (article)

DOI [BibTex]


Part-Aligned Bilinear Representations for Person Re-identification

Suh, Y., Wang, J., Tang, S., Mei, T., Lee, K. M.

arXiv preprint arXiv:1804.07094, 2018 (article)

Abstract
We propose a novel network that learns a part-aligned representation for person re-identification. It handles the body part misalignment problem, that is, body parts are misaligned across human detections due to pose/viewpoint change and unreliable detection. Our model consists of a two-stream network (one stream for appearance map extraction and the other one for body part map extraction) and a bilinear-pooling layer that generates and spatially pools a part-aligned map. Each local feature of the part-aligned map is obtained by a bilinear mapping of the corresponding local appearance and body part descriptors. Our new representation leads to a robust image matching similarity, which is equivalent to an aggregation of the local similarities of the corresponding body parts combined with the weighted appearance similarity. This part-aligned representation reduces the part misalignment problem significantly. Our approach is also advantageous over other pose-guided representations (e.g., extracting representations over the bounding box of each body part) by learning part descriptors optimal for person re-identification. For training the network, our approach does not require any part annotation on the person re-identification dataset. Instead, we simply initialize the part sub-stream using a pre-trained sub-network of an existing pose estimation network, and train the whole network to minimize the re-identification loss. We validate the effectiveness of our approach by demonstrating its superiority over the state-of-the-art methods on the standard benchmark datasets, including Market-1501, CUHK03, CUHK01 and DukeMTMC, and the standard video dataset MARS.
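The bilinear-pooling layer can be sketched in a few lines: at each spatial location, the appearance vector and the part vector form an outer product, and these products are averaged over the image into a fixed-size descriptor. The dimensions below are made up for illustration; this is not the paper's network:

```python
import numpy as np

def part_aligned_pool(appearance, parts):
    """Bilinear pooling of an appearance map (H, W, C_a) and a body
    part map (H, W, C_p): take the outer product of the two local
    vectors at every location, average spatially, and flatten into a
    single C_a * C_p descriptor."""
    h, w = appearance.shape[:2]
    feat = np.einsum('hwa,hwp->ap', appearance, parts) / (h * w)
    return feat.reshape(-1)

app = np.random.rand(8, 8, 16)      # appearance stream output
par = np.random.rand(8, 8, 4)       # body part stream output
f = part_aligned_pool(app, par)     # 64-dim part-aligned descriptor
```

Because the part map weights each location, appearance features are aggregated per body part, which is what makes the resulting matching similarity robust to misalignment.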

link (url) [BibTex]


Online optimal trajectory generation for robot table tennis

Koc, O., Maeda, G., Peters, J.

Robotics and Autonomous Systems, 105, pages: 121-137, 2018 (article)

link (url) DOI [BibTex]


Dissecting Adam: The Sign, Magnitude and Variance of Stochastic Gradients

Balles, L., Hennig, P.

In Proceedings of the 35th International Conference on Machine Learning (ICML), 2018 (inproceedings) Accepted

Abstract
The ADAM optimizer is exceedingly popular in the deep learning community. Often it works very well, sometimes it doesn't. Why? We interpret ADAM as a combination of two aspects: for each weight, the update direction is determined by the sign of stochastic gradients, whereas the update magnitude is determined by an estimate of their relative variance. We disentangle these two aspects and analyze them in isolation, gaining insight into the mechanisms underlying ADAM. This analysis also extends recent results on adverse effects of ADAM on generalization, isolating the sign aspect as the problematic one. Transferring the variance adaptation to SGD gives rise to a novel method, completing the practitioner's toolbox for problems where ADAM fails.
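The sign/magnitude decomposition described above can be written out directly. The sketch below is an illustration of the interpretation, using exact moment estimates m ≈ E[g] and v ≈ E[g²] in place of Adam's exponential moving averages:

```python
import numpy as np

def adam_decomposed(m, v, eps=1e-8):
    """Decompose an Adam-style step per weight: the direction is the
    sign of the first moment m; the magnitude |m| / sqrt(v) lies in
    [0, 1] and shrinks as the relative variance of the stochastic
    gradients grows (v >= m**2 since v estimates E[g^2])."""
    direction = np.sign(m)
    magnitude = np.abs(m) / (np.sqrt(v) + eps)
    return direction, magnitude

# Deterministic gradient (v == m**2): full-magnitude signed step.
d, s = adam_decomposed(np.array([0.5]), np.array([0.25]))
```

Transferring only the `magnitude` factor onto a plain SGD step, while keeping the true gradient as the direction, corresponds to the variance-adapted SGD variant the abstract alludes to.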

link (url) [BibTex]


Detecting non-causal artifacts in multivariate linear regression models

Janzing, D., Schölkopf, B.

Proceedings of the 35th International Conference on Machine Learning (ICML 2018), 2018 (conference) Accepted

[BibTex]


Blind Justice: Fairness with Encrypted Sensitive Attributes

Kilbertus, N., Gascon, A., Kusner, M., Veale, M., Gummadi, K., Weller, A.

Proceedings of the 35th International Conference on Machine Learning (ICML 2018), 2018 (conference) Accepted

[BibTex]


A probabilistic model for the numerical solution of initial value problems

Schober, M., Särkkä, S., Hennig, P.

Statistics and Computing, Springer US, 2018 (article)

Abstract
We study connections between ordinary differential equation (ODE) solvers and probabilistic regression methods in statistics. We provide a new view of probabilistic ODE solvers as active inference agents operating on stochastic differential equation models that estimate the unknown initial value problem (IVP) solution from approximate observations of the solution derivative, as provided by the ODE dynamics. Adding to this picture, we show that several multistep methods of Nordsieck form can be recast as Kalman filtering on q-times integrated Wiener processes. Doing so provides a family of IVP solvers that return a Gaussian posterior measure, rather than a point estimate. We show that some such methods have low computational overhead, nontrivial convergence order, and that the posterior has a calibrated concentration rate. Additionally, we suggest a step size adaptation algorithm which completes the proposed method to a practically useful implementation, which we experimentally evaluate using a representative set of standard codes in the DETEST benchmark set.
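The filtering view can be made concrete for the smallest case, q = 1 and a scalar ODE: the state holds (x, x′) under a once-integrated Wiener process prior, and each step treats the ODE right-hand side as a noisy observation of x′. This is a hedged sketch of the construction, not the authors' implementation:

```python
import numpy as np

def kalman_ode_step(mean, cov, f, h, q=1.0, r=1e-10):
    """One predict/update step of a probabilistic IVP solver sketch:
    IWP(1) prior on the state (x, x'), with the ODE dynamics f
    supplying a (nearly noise-free) observation of the derivative."""
    A = np.array([[1.0, h], [0.0, 1.0]])                       # IWP(1) transition
    Q = q * np.array([[h**3 / 3, h**2 / 2], [h**2 / 2, h]])    # process noise
    H = np.array([[0.0, 1.0]])                                 # observe x'
    m_p = A @ mean                                             # predict
    P_p = A @ cov @ A.T + Q
    z = f(m_p[0])                                              # "measurement" x' = f(x)
    S = H @ P_p @ H.T + r
    K = P_p @ H.T / S                                          # Kalman gain
    mean_new = m_p + (K * (z - m_p[1])).ravel()                # update
    cov_new = P_p - K @ H @ P_p
    return mean_new, cov_new

# x' = x, x(0) = 1, so x(1) = e; the filter returns a posterior mean
# and covariance rather than a bare point estimate.
mean, cov = np.array([1.0, 1.0]), np.eye(2) * 1e-12
for _ in range(100):
    mean, cov = kalman_ode_step(mean, cov, lambda x: x, 0.01)
```

The returned covariance is what distinguishes this family of solvers from classical codes: it quantifies how concentrated the posterior over the IVP solution is.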

PDF Code DOI Project Page [BibTex]


Frame-Recurrent Video Super-Resolution

Sajjadi, M. S. M., Vemulapalli, R., Brown, M.

The IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2018), 2018 (conference) Accepted

ArXiv [BibTex]


Autofocusing-based phase correction

Loktyushin, A., Ehses, P., Schölkopf, B., Scheffler, K.

Magnetic Resonance in Medicine, 2018, Epub ahead of print (article)

DOI [BibTex]


Playful: Reactive Programming for Orchestrating Robotic Behavior

Berenz, V., Schaal, S.

IEEE Robotics & Automation Magazine, 2018 (article) In press

[BibTex]


Temporal Difference Models: Model-Free Deep RL for Model-Based Control

Pong*, V., Gu*, S., Dalal, M., Levine, S.

6th International Conference on Learning Representations (ICLR 2018), 2018, *equal contribution (conference) Accepted

link (url) [BibTex]


Oncilla robot: a versatile open-source quadruped research robot with compliant pantograph legs

Spröwitz, A., Tuleu, A., Ajaoolleian, M., Vespignani, M., Moeckel, R., Eckert, P., D’Haene, M., Degrave, J., Nordmann, A., Schrauwen, B., Steil, J., Ijspeert, A. J.

arXiv:1803.06259 [cs], 2018 (unpublished) Submitted

Abstract
We present Oncilla robot, a novel mobile, quadruped legged locomotion machine. This large-cat-sized, 5.1 kg robot is one of a recent, bioinspired class of legged robots designed with the capability of model-free locomotion control. Animal legged locomotion in rough terrain is clearly shaped by sensor feedback systems. Results with Oncilla robot show that agile and versatile locomotion is possible without sensory signals to some extent, and tracking becomes robust when feedback control is added (Ajaoolleian 2015). By incorporating mechanical and control blueprints inspired by animals, and by observing the resulting robot locomotion characteristics, we aim to understand the contribution of individual components. Legged robots have a wide mechanical and control design parameter space, and a unique potential as research tools to investigate principles of biomechanics and legged locomotion control. But the hardware and controller design can be a steep initial hurdle for academic research. To facilitate an easy start and the development of legged robots, Oncilla robot's blueprints are available through open-source. [...]

link (url) [BibTex]


Learning 3D Shape Completion from Laser Scan Data with Weak Supervision

Stutz, D., Geiger, A.

In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), IEEE Computer Society, 2018 (inproceedings)

Abstract
3D shape completion from partial point clouds is a fundamental problem in computer vision and computer graphics. Recent approaches can be characterized as either data-driven or learning-based. Data-driven approaches rely on a shape model whose parameters are optimized to fit the observations. Learning-based approaches, in contrast, avoid the expensive optimization step and instead directly predict the complete shape from the incomplete observations using deep neural networks. However, full supervision is required which is often not available in practice. In this work, we propose a weakly-supervised learning-based approach to 3D shape completion which neither requires slow optimization nor direct supervision. While we also learn a shape prior on synthetic data, we amortize, i.e., learn, maximum likelihood fitting using deep neural networks resulting in efficient shape completion without sacrificing accuracy. Tackling 3D shape completion of cars on ShapeNet and KITTI, we demonstrate that the proposed amortized maximum likelihood approach is able to compete with a fully supervised baseline and a state-of-the-art data-driven approach while being significantly faster. On ModelNet, we additionally show that the approach is able to generalize to other object categories as well.

pdf suppmat Project Page [BibTex]


Feedback Control Goes Wireless: Guaranteed Stability over Low-power Multi-hop Networks

Mager, F., Baumann, D., Jacob, R., Thiele, L., Trimpe, S., Zimmerling, M.

2018, Under review at a conference (conference) Submitted

Abstract
Closing feedback loops fast and over long distances is key to emerging applications; for example, robot motion control and swarm coordination require update intervals below 100 ms. Low-power wireless is preferred for its flexibility, low cost, and small form factor, especially if the devices support multi-hop communication. Thus far, however, closed-loop control over multi-hop low-power wireless has only been demonstrated for update intervals on the order of multiple seconds. This paper presents a wireless embedded system that tames imperfections impairing control performance such as jitter or packet loss, and a control design that exploits the essential properties of this system to provably guarantee closed-loop stability for linear dynamic systems. Using experiments on a testbed with multiple cart-pole systems, we are the first to demonstrate the feasibility and to assess the performance of closed-loop control and coordination over multi-hop low-power wireless for update intervals from 20 ms to 50 ms.

arXiv PDF [BibTex]


Tempered Adversarial Networks

Sajjadi, M. S. M., Schölkopf, B.

Proceedings of the 35th International Conference on Machine Learning (ICML 2018), 2018 (conference) Accepted

ArXiv [BibTex]


PET/MRI Hybrid Systems

Mannheim, G. J., Schmid, A. M., Schwenck, J., Katiyar, P., Herfert, K., Pichler, B. J., Disselhorst, J. A.

Seminars in Nuclear Medicine, 2018 (article) In press

DOI [BibTex]


PIPPS: Flexible Model-Based Policy Search Robust to the Curse of Chaos

Parmas, P., Doya, K., Rasmussen, C., Peters, J.

Proceedings of the 35th International Conference on Machine Learning (ICML 2018), 2018 (conference) Accepted

[BibTex]


Learning Independent Causal Mechanisms

Parascandolo, G., Kilbertus, N., Rojas-Carulla, M., Schölkopf, B.

Proceedings of the 35th International Conference on Machine Learning (ICML 2018), 2018 (conference) Accepted

[BibTex]


Leveraging the Crowd to Detect and Reduce the Spread of Fake News and Misinformation

Kim, J., Tabibian, B., Oh, A., Schölkopf, B., Gomez Rodriguez, M.

Proceedings of the 11th ACM International Conference on Web Search and Data Mining (WSDM 2018), 2018 (conference) Accepted

[BibTex]


Leave no Trace: Learning to Reset for Safe and Autonomous Reinforcement Learning

Eysenbach, B., Gu, S., Ibarz, J., Levine, S.

6th International Conference on Learning Representations (ICLR 2018), 2018 (conference) Accepted

Videos link (url) [BibTex]


Temporal Human Action Segmentation via Dynamic Clustering

Zhang, Y., Sun, H., Tang, S., Neumann, H.

arXiv preprint arXiv:1803.05790, 2018 (article)

Abstract
We present an effective dynamic clustering algorithm for the task of temporal human action segmentation, which has comprehensive applications such as robotics, motion analysis, and patient monitoring. Our proposed algorithm is unsupervised, fast, generic to process various types of features, and applicable in both the online and offline settings. We perform extensive experiments of processing data streams, and show that our algorithm achieves the state-of-the-art results for both online and offline settings.

link (url) [BibTex]


Generalized Score Functions for Causal Discovery

Huang, B., Zhang, K., Lin, Y., Schölkopf, B., Glymour, C.

Proceedings of the 24th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD 2018), 2018 (conference) Accepted

[BibTex]


Prediction of Glucose Tolerance without an Oral Glucose Tolerance Test

Babbar, R., Heni, M., Peter, A., Hrabě de Angelis, M., Häring, H., Fritsche, A., Preissl, H., Schölkopf, B., Wagner, R.

Frontiers in Endocrinology, 9, pages: 82, 2018 (article)

DOI [BibTex]


Memristor-enhanced humanoid robot control system–Part II: circuit theoretic model and performance analysis

Baumann, D., Ascoli, A., Tetzlaff, R., Chua, L. O., Hild, M.

International Journal of Circuit Theory and Applications, 46(1):184-220, 2018 (article)

DOI [BibTex]


Learning Transformation Invariant Representations with Weak Supervision

Coors, B., Condurache, A., Mertins, A., Geiger, A.

In International Conference on Computer Vision Theory and Applications, 2018 (inproceedings)

Abstract
Deep convolutional neural networks are the current state-of-the-art solution to many computer vision tasks. However, their ability to handle large global and local image transformations is limited. Consequently, extensive data augmentation is often utilized to incorporate prior knowledge about desired invariances to geometric transformations such as rotations or scale changes. In this work, we combine data augmentation with an unsupervised loss which enforces similarity between the predictions of augmented copies of an input sample. Our loss acts as an effective regularizer which facilitates the learning of transformation invariant representations. We investigate the effectiveness of the proposed similarity loss on rotated MNIST and the German Traffic Sign Recognition Benchmark (GTSRB) in the context of different classification models including ladder networks. Our experiments demonstrate improvements with respect to the standard data augmentation approach for supervised and semi-supervised learning tasks, in particular in the presence of little annotated data. In addition, we analyze the performance of the proposed approach with respect to its hyperparameters, including the strength of the regularization as well as the layer where representation similarity is enforced.
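The combined objective described above can be sketched in a few lines. The names and arrays below are illustrative placeholders (the paper's models are CNNs; here plain arrays stand in for predictions on a sample and on its augmented copy):

```python
import numpy as np

def invariance_loss(pred_orig, pred_aug):
    """Unsupervised regularizer: mean squared distance between the
    model's predictions for an input and for a transformed copy of it.
    Minimizing it pushes the representation toward transformation
    invariance without needing labels."""
    return np.mean((pred_orig - pred_aug) ** 2)

def total_loss(supervised, pred_orig, pred_aug, lam=1.0):
    """Supervised loss plus the similarity term, weighted by lam
    (the regularization strength discussed in the abstract)."""
    return supervised + lam * invariance_loss(pred_orig, pred_aug)
```

Because the similarity term needs no labels, it applies equally to the unlabeled part of a semi-supervised training set, which is where the abstract reports the largest gains.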

pdf [BibTex]


Cause-Effect Inference by Comparing Regression Errors

Blöbaum, P., Janzing, D., Washio, T., Shimizu, S., Schölkopf, B.

Proceedings of the 21st International Conference on Artificial Intelligence and Statistics (AISTATS 2018) , 84, pages: 900-909, Proceedings of Machine Learning Research, (Editors: Amos Storkey and Fernando Perez-Cruz), PMLR, 2018 (conference)

link (url) [BibTex]


Shaping in Practice: Training Wheels to Learn Fast Hopping Directly in Hardware

Heim, S., Ruppert, F., Sarvestani, A., Spröwitz, A.

ICRA 2018, 2018 (unpublished) Accepted

Abstract
Learning robot controllers instead of designing them can greatly reduce the engineering effort required, while also emphasizing robustness. Despite considerable progress in simulation, applying learning directly in hardware is still challenging, in part due to the necessity of exploring potentially unstable parameters. We explore the concept of shaping the reward landscape with training wheels: temporary modifications of the physical hardware that facilitate learning. We demonstrate the concept with a robot leg mounted on a boom learning to hop fast. This proof of concept embodies typical challenges such as instability and contact, while being simple enough to empirically map out and visualize the reward landscape. Based on our results we propose three criteria for designing effective training wheels for learning in robotics.

Video Youtube link (url) [BibTex]


Automatic Estimation of Modulation Transfer Functions

Bauer, M., Volchkov, V., Hirsch, M., Schölkopf, B.

International Conference on Computational Photography (ICCP 2018), 2018 (conference) Accepted

Project Page [BibTex]


Lions and Tigers and Bears: Capturing Non-Rigid, 3D, Articulated Shape from Images

Zuffi, S., Kanazawa, A., Black, M. J.

In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), IEEE Computer Society, 2018 (inproceedings)

Abstract
Animals are widespread in nature and the analysis of their shape and motion is important in many fields and industries. Modeling 3D animal shape, however, is difficult because the 3D scanning methods used to capture human shape are not applicable to wild animals or natural settings. Consequently, we propose a method to capture the detailed 3D shape of animals from images alone. The articulated and deformable nature of animals makes this problem extremely challenging, particularly in unconstrained environments with moving and uncalibrated cameras. To make this possible, we use a strong prior model of articulated animal shape that we fit to the image data. We then deform the animal shape in a canonical reference pose such that it matches image evidence when articulated and projected into multiple images. Our method extracts significantly more 3D shape detail than previous methods and is able to model new species, including the shape of an extinct animal, using only a few video frames. Additionally, the projected 3D shapes are accurate enough to facilitate the extraction of a realistic texture map from multiple frames.

pdf [BibTex]


Learning from Outside the Viability Kernel: Why we Should Build Robots that can Fail with Grace

Heim, S., Spröwitz, A.

SIMPAR 2018, 2018 (conference) Accepted

[BibTex]


Differentially Private Database Release via Kernel Mean Embeddings

Balog, M., Tolstikhin, I., Schölkopf, B.

Proceedings of the 35th International Conference on Machine Learning (ICML 2018), 2018 (conference) Accepted

[BibTex]


PoTion: Pose MoTion Representation for Action Recognition

Choutas, V., Weinzaepfel, P., Revaud, J., Schmid, C.

In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), IEEE Computer Society, 2018 (inproceedings)

Abstract
Most state-of-the-art methods for action recognition rely on a two-stream architecture that processes appearance and motion independently. In this paper, we claim that considering them jointly offers rich information for action recognition. We introduce a novel representation that gracefully encodes the movement of some semantic keypoints. We use the human joints as these keypoints and term our Pose moTion representation PoTion. Specifically, we first run a state-of-the-art human pose estimator [4] and extract heatmaps for the human joints in each frame. We obtain our PoTion representation by temporally aggregating these probability maps. This is achieved by ‘colorizing’ each of them depending on the relative time of the frames in the video clip and summing them. This fixed-size representation for an entire video clip is suitable to classify actions using a shallow convolutional neural network. Our experimental evaluation shows that PoTion outperforms other state-of-the-art pose representations [6, 48]. Furthermore, it is complementary to standard appearance and motion streams. When combining PoTion with the recent two-stream I3D approach [5], we obtain state-of-the-art performance on the JHMDB, HMDB and UCF101 datasets.
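The temporal colorization can be sketched for the simplest two-channel case. This is an illustration under simplifying assumptions (a single joint; the paper also considers more color channels and stacks one such map per joint):

```python
import numpy as np

def potion(heatmaps):
    """Aggregate a (T, H, W) stack of per-frame joint heatmaps into a
    fixed-size (H, W, 2) representation: each frame is 'colorized' by
    its relative time t/(T-1) into two channels, then summed over time.
    The output size is independent of the clip length T."""
    T = heatmaps.shape[0]
    t = np.arange(T) / (T - 1)                  # relative time in [0, 1]
    colors = np.stack([1.0 - t, t], axis=1)     # (T, 2) colorization weights
    return np.einsum('thw,tc->hwc', heatmaps, colors)

rep = potion(np.random.rand(10, 64, 64))        # same shape for any clip length
```

Early frames contribute mostly to the first channel and late frames to the second, so the summed map encodes both where a joint was and when it was there.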

PDF [BibTex]


Revisiting First-Order Convex Optimization Over Linear Spaces

Locatello, F., Raj, A., Praneeth Karimireddy, S., Rätsch, G., Schölkopf, B., Stich, S. U., Jaggi, M.

Proceedings of the 35th International Conference on Machine Learning (ICML 2018), 2018 (conference) Accepted

[BibTex]


Tactile perception by electrovibration

Vardar, Y.

Koc University, 2018 (phdthesis)

Abstract
One approach to generating realistic haptic feedback on touch screens is electrovibration. In this technique, the friction force is altered via electrostatic forces, which are generated by applying an alternating voltage signal to the conductive layer of a capacitive touchscreen. Although the technology for rendering haptic effects on touch surfaces using electrovibration is already in place, our knowledge of the perception mechanisms behind these effects is limited. This thesis aims to explore the mechanisms underlying haptic perception of electrovibration in two parts. In the first part, the effect of input signal properties on electrovibration perception is investigated. Our findings indicate that the perception of electrovibration stimuli depends on frequency-dependent electrical properties of human skin and human tactile sensitivity. When a voltage signal is applied to a touchscreen, it is filtered electrically by the human finger and generates electrostatic forces in the skin and mechanoreceptors. Depending on the spectral energy content of this electrostatic force signal, different psychophysical channels may be activated. The channel which mediates detection is determined by the frequency component which has a higher energy than the sensory threshold at that frequency. In the second part, the effect of masking on electrovibration perception is investigated. We show that the detection thresholds are elevated as linear functions of masking levels for simultaneous and pedestal masking. The masking effectiveness is larger for pedestal masking compared to simultaneous masking. Moreover, our results suggest that sharpness perception depends on the local contrast between background and foreground stimuli, which varies as a function of masking amplitude and activation levels of frequency-dependent psychophysical channels.

[BibTex]


PoTion: Pose MoTion Representation for Action Recognition

Choutas, V., Weinzaepfel, P., Revaud, J., Schmid, C.

In CVPR 2018 - IEEE Conference on Computer Vision and Pattern Recognition, pages: 1-10, IEEE, Salt Lake City, United States, June 2018 (inproceedings)

link (url) [BibTex]

2017


Interpolated Policy Gradient: Merging On-Policy and Off-Policy Gradient Estimation for Deep Reinforcement Learning

Gu, S., Lillicrap, T., Turner, R. E., Ghahramani, Z., Schölkopf, B., Levine, S.

Advances in Neural Information Processing Systems 30 (NIPS 2017), pages: 3849-3858, (Editors: Guyon I. and von Luxburg U. and Bengio S. and Wallach H. and Fergus R. and Vishwanathan S. and Garnett R.), Curran Associates, Inc., December 2017 (conference)

link (url) [BibTex]


Boosting Variational Inference: an Optimization Perspective

Locatello, F., Khanna, R., Ghosh, J., Rätsch, G.

Advances in Neural Information Processing Systems 30 (NIPS 2017), (Editors: Guyon I. and von Luxburg U. and Bengio S. and Wallach H. and Fergus R. and Vishwanathan S. and Garnett R.), Curran Associates, Inc., December 2017 (conference)

link (url) [BibTex]


Learning Independent Causal Mechanisms

Parascandolo, G., Rojas-Carulla, M., Kilbertus, N., Schölkopf, B.

Advances in Neural Information Processing Systems 30 (NIPS 2017), (Editors: Guyon I. and von Luxburg U. and Bengio S. and Wallach H. and Fergus R. and Vishwanathan S. and Garnett R.), Curran Associates, Inc., December 2017 (conference)

link (url) [BibTex]


Avoiding Discrimination through Causal Reasoning

Kilbertus, N., Rojas-Carulla, M., Parascandolo, G., Hardt, M., Janzing, D., Schölkopf, B.

Advances in Neural Information Processing Systems 30 (NIPS 2017), pages: 656-666, (Editors: Guyon I. and von Luxburg U. and Bengio S. and Wallach H. and Fergus R. and Vishwanathan S. and Garnett R.), Curran Associates, Inc., December 2017 (conference)

link (url) [BibTex]


Greedy Algorithms for Cone Constrained Optimization with Convergence Guarantees

Locatello, F., Tschannen, M., Rätsch, G., Jaggi, M.

Advances in Neural Information Processing Systems 30 (NIPS 2017), pages: 773-784, (Editors: Guyon I. and von Luxburg U. and Bengio S. and Wallach H. and Fergus R. and Vishwanathan S. and Garnett R.), Curran Associates, Inc., December 2017 (conference)

link (url) [BibTex]


AdaGAN: Boosting Generative Models

Tolstikhin, I., Gelly, S., Bousquet, O., Simon-Gabriel, C. J., Schölkopf, B.

Advances in Neural Information Processing Systems 30 (NIPS 2017), pages: 5430-5439, (Editors: Guyon I. and von Luxburg U. and Bengio S. and Wallach H. and Fergus R. and Vishwanathan S. and Garnett R.), Curran Associates, Inc., December 2017 (conference)

arXiv link (url) [BibTex]


The Numerics of GANs

Mescheder, L., Nowozin, S., Geiger, A.

In Advances in Neural Information Processing Systems 30 (NIPS 2017), (Editors: Guyon I. and von Luxburg U. and Bengio S. and Wallach H. and Fergus R. and Vishwanathan S. and Garnett R.), Curran Associates, Inc., December 2017 (inproceedings)

Abstract
In this paper, we analyze the numerics of common algorithms for training Generative Adversarial Networks (GANs). Using the formalism of smooth two-player games we analyze the associated gradient vector field of GAN training objectives. Our findings suggest that the convergence of current algorithms suffers due to two factors: i) presence of eigenvalues of the Jacobian of the gradient vector field with zero real-part, and ii) eigenvalues with big imaginary part. Using these findings, we design a new algorithm that overcomes some of these limitations and has better convergence properties. Experimentally, we demonstrate its superiority on training common GAN architectures and show convergence on GAN architectures that are known to be notoriously hard to train.
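The first failure mode, eigenvalues of the Jacobian of the gradient vector field with zero real part, can be reproduced on a toy bilinear game. This is a standard textbook illustration, not an experiment from the paper:

```python
import numpy as np

# Toy zero-sum game f(x, y) = x * y: player 1 does gradient descent on f
# in x, player 2 gradient ascent in y. Gradient vector field v = (-y, x).
J = np.array([[0.0, -1.0], [1.0, 0.0]])   # Jacobian of v at the equilibrium (0, 0)
eig = np.linalg.eigvals(J)                 # purely imaginary: +-i

# Simultaneous (explicit Euler) gradient steps therefore spiral outward
# instead of converging to the equilibrium.
z = np.array([1.0, 0.0])
for _ in range(100):
    z = z + 0.1 * J @ z
```

Each Euler step multiplies the distance from the equilibrium by sqrt(1 + 0.1**2), so the iterates provably diverge; damping these rotational components is what the paper's modified algorithm targets.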

pdf [BibTex]


Safe Adaptive Importance Sampling

Stich, S. U., Raj, A., Jaggi, M.

Advances in Neural Information Processing Systems 30 (NIPS 2017), pages: 4384-4394, (Editors: Guyon I. and von Luxburg U. and Bengio S. and Wallach H. and Fergus R. and Vishwanathan S. and Garnett R.), Curran Associates, Inc., December 2017 (conference)

link (url) [BibTex]


Multi-Modal Imitation Learning from Unstructured Demonstrations using Generative Adversarial Nets

Hausman, K., Chebotar, Y., Schaal, S., Sukhatme, G., Lim, J.

In Advances in Neural Information Processing Systems 30 (NIPS 2017), (Editors: Guyon I. and von Luxburg U. and Bengio S. and Wallach H. and Fergus R. and Vishwanathan S. and Garnett R.), Curran Associates, Inc., December 2017 (inproceedings)

pdf video [BibTex]