

2017


Learning a model of facial shape and expression from 4D scans

Li, T., Bolkart, T., Black, M. J., Li, H., Romero, J.

ACM Transactions on Graphics, 36(6):194:1-194:17, November 2017, Two first authors contributed equally (article)

Abstract
The field of 3D face modeling has a large gap between high-end and low-end methods. At the high end, the best facial animation is indistinguishable from real humans, but this comes at the cost of extensive manual labor. At the low end, face capture from consumer depth sensors relies on 3D face models that are not expressive enough to capture the variability in natural facial shape and expression. We seek a middle ground by learning a facial model from thousands of accurately aligned 3D scans. Our FLAME model (Faces Learned with an Articulated Model and Expressions) is designed to work with existing graphics software and be easy to fit to data. FLAME uses a linear shape space trained from 3800 scans of human heads. FLAME combines this linear shape space with an articulated jaw, neck, and eyeballs, pose-dependent corrective blendshapes, and additional global expression blendshapes. The pose and expression dependent articulations are learned from 4D face sequences in the D3DFACS dataset along with additional 4D sequences. We accurately register a template mesh to the scan sequences and make the D3DFACS registrations available for research purposes. In total the model is trained from over 33,000 scans. FLAME is low-dimensional but more expressive than the FaceWarehouse model and the Basel Face Model. We compare FLAME to these models by fitting them to static 3D scans and 4D sequences using the same optimization method. FLAME is significantly more accurate and is available for research purposes (http://flame.is.tue.mpg.de).
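To make the model structure above concrete, here is a minimal NumPy sketch of the generic pattern FLAME instantiates: a template mesh plus linear identity and expression blendshapes, posed by linear blend skinning over a small set of joints (jaw, neck, eyeballs). All names, shapes, and dimensions below are illustrative placeholders, not the released FLAME parameterization.

```python
import numpy as np

def flame_like_vertices(template, shape_dirs, expr_dirs, betas, psi,
                        skin_weights, joint_transforms):
    """Toy FLAME-style composition: additive blendshapes, then skinning.

    template:          (V, 3) mean head mesh
    shape_dirs:        (V, 3, S) identity blendshape basis
    expr_dirs:         (V, 3, E) expression blendshape basis
    betas, psi:        (S,) and (E,) coefficients
    skin_weights:      (V, J) linear-blend-skinning weights
    joint_transforms:  (J, 4, 4) rigid transforms (jaw, neck, eyeballs, ...)
    """
    # Add identity and expression offsets to the template in the rest pose.
    v_rest = template + shape_dirs @ betas + expr_dirs @ psi
    # Linear blend skinning: each vertex moves with a weighted mix of joints.
    v_h = np.concatenate([v_rest, np.ones((len(v_rest), 1))], axis=1)  # (V, 4)
    per_joint = np.einsum('jab,vb->vja', joint_transforms, v_h)        # (V, J, 4)
    return np.einsum('vj,vja->va', skin_weights, per_joint)[:, :3]

# Tiny random example, just to exercise the shapes.
rng = np.random.default_rng(0)
V, S, E, J = 100, 10, 10, 4
v = flame_like_vertices(rng.normal(size=(V, 3)),
                        rng.normal(size=(V, 3, S)) * 0.01,
                        rng.normal(size=(V, 3, E)) * 0.01,
                        rng.normal(size=S), rng.normal(size=E),
                        np.full((V, J), 1.0 / J),
                        np.tile(np.eye(4), (J, 1, 1)))
print(v.shape)  # (100, 3)
```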

ps

data/model video code chumpy code tensorflow paper supplemental Project Page [BibTex]



Investigating Body Image Disturbance in Anorexia Nervosa Using Novel Biometric Figure Rating Scales: A Pilot Study

Mölbert, S. C., Thaler, A., Streuber, S., Black, M. J., Karnath, H., Zipfel, S., Mohler, B., Giel, K. E.

European Eating Disorders Review, 25(6):607-612, November 2017 (article)

Abstract
This study uses novel biometric figure rating scales (FRS) spanning body mass index (BMI) 13.8 to 32.2 kg/m2 and BMI 18 to 42 kg/m2. The aims of the study were (i) to compare FRS body weight dissatisfaction and perceptual distortion of women with anorexia nervosa (AN) to a community sample; (ii) to examine how FRS parameters are associated with questionnaire body dissatisfaction, eating disorder symptoms and appearance comparison habits; and (iii) to test whether the weight spectrum of the FRS matters. Women with AN (n = 24) and a community sample of women (n = 104) selected their current and ideal body on the FRS and completed additional questionnaires. Women with AN accurately picked the body that aligned best with their actual weight in both FRS. Controls underestimated their BMI in the FRS 14–32 and were accurate in the FRS 18–42. In both FRS, women with AN desired a body close to their actual BMI and controls desired a thinner body. Our observations suggest that body image disturbance in AN is unlikely to be characterized by a visual perceptual disturbance, but rather by an idealization of underweight in conjunction with high body dissatisfaction. The weight spectrum of FRS can influence the accuracy of BMI estimation.

ps

publisher DOI Project Page [BibTex]


Embodied Hands: Modeling and Capturing Hands and Bodies Together

Romero, J., Tzionas, D., Black, M. J.

ACM Transactions on Graphics, (Proc. SIGGRAPH Asia), 36(6):245:1-245:17, ACM, November 2017 (article)

Abstract
Humans move their hands and bodies together to communicate and solve tasks. Capturing and replicating such coordinated activity is critical for virtual characters that behave realistically. Surprisingly, most methods treat the 3D modeling and tracking of bodies and hands separately. Here we formulate a model of hands and bodies interacting together and fit it to full-body 4D sequences. When scanning or capturing the full body in 3D, hands are small and often partially occluded, making their shape and pose hard to recover. To cope with low-resolution, occlusion, and noise, we develop a new model called MANO (hand Model with Articulated and Non-rigid defOrmations). MANO is learned from around 1000 high-resolution 3D scans of hands of 31 subjects in a wide variety of hand poses. The model is realistic, low-dimensional, captures non-rigid shape changes with pose, is compatible with standard graphics packages, and can fit any human hand. MANO provides a compact mapping from hand poses to pose blend shape corrections and a linear manifold of pose synergies. We attach MANO to a standard parameterized 3D body shape model (SMPL), resulting in a fully articulated body and hand model (SMPL+H). We illustrate SMPL+H by fitting complex, natural activities of subjects captured with a 4D scanner. The fitting is fully automatic and results in full body models that move naturally with detailed hand motions and a realism not seen before in full body performance capture. The models and data are freely available for research purposes at http://mano.is.tue.mpg.de.
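The "compact mapping from hand poses to pose blend shape corrections" can be pictured with the standard SMPL-family construction: a pose feature built from joint rotations drives linear corrective offsets on the rest mesh before skinning. The sketch below assumes that construction; all shapes and names are illustrative stand-ins, not MANO's released parameterization.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def pose_feature(axis_angles):
    """Flattened (R - I) per joint, the usual SMPL-family pose feature."""
    R = Rotation.from_rotvec(axis_angles).as_matrix()   # (J, 3, 3)
    return (R - np.eye(3)).reshape(-1)                  # (9J,)

def corrected_rest_hand(template, pose_dirs, axis_angles):
    """Add linear pose-corrective offsets to the rest mesh.

    template:   (V, 3) rest hand mesh
    pose_dirs:  (V, 3, 9J) learned corrective basis (illustrative shapes)
    """
    return template + pose_dirs @ pose_feature(axis_angles)

rng = np.random.default_rng(1)
V, J = 80, 15
v = corrected_rest_hand(rng.normal(size=(V, 3)),
                        rng.normal(size=(V, 3, 9 * J)) * 1e-3,
                        rng.normal(size=(J, 3)) * 0.1)
print(v.shape)  # (80, 3)
```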

ps

website youtube paper suppl video link (url) DOI Project Page [BibTex]



An Online Scalable Approach to Unified Multirobot Cooperative Localization and Object Tracking

Ahmad, A., Lawless, G., Lima, P.

IEEE Transactions on Robotics (T-RO), 33, pages: 1184 - 1199, October 2017 (article)

Abstract
In this article we present a unified approach for multi-robot cooperative simultaneous localization and object tracking based on particle filters. Our approach is scalable with respect to the number of robots in the team. We introduce a method that reduces, from an exponential to a linear growth, the space and computation time requirements with respect to the number of robots in order to maintain a given level of accuracy in the full state estimation. Our method requires no increase in the number of particles with respect to the number of robots. However, in our method each particle represents a full state hypothesis, leading to the linear dependency on the number of robots of both space and time complexity. The derivation of the algorithm implementing our approach from a standard particle filter algorithm and its complexity analysis are presented. Through an extensive set of simulation experiments on a large number of randomized datasets, we demonstrate the correctness and efficacy of our approach. Through real robot experiments on a standardized open dataset of a team of four soccer playing robots tracking a ball, we evaluate our method's estimation accuracy with respect to the ground truth values. Through comparisons with other methods based on i) nonlinear least squares minimization and ii) joint extended Kalman filter, we further highlight our method's advantages. Finally, we also present a robustness test for our approach by evaluating it under scenarios of communication and vision failure in teammate robots.
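The key scalability idea, each particle carrying one hypothesis of the full joint state (all robot poses plus the tracked object) so that space and time grow linearly with the number of robots, can be sketched generically as follows. This is a plain joint-state particle filter, not the paper's exact algorithm; the noise models and names are illustrative.

```python
import numpy as np

def joint_state_pf_step(particles, weights, controls, z_obj, rng,
                        motion_std=0.05, obs_std=0.1):
    """One predict/update/resample step of a joint-state particle filter.

    particles: (N, R*2 + 2), each particle stacks all R robot (x, y) poses
    plus the tracked object's (x, y). This "full state per particle" layout
    is what keeps memory and time linear in the number of robots R.
    """
    N, D = particles.shape
    # Predict: robots move by their controls, the object by a random walk.
    particles[:, :-2] += controls.reshape(1, -1) + rng.normal(0, motion_std, (N, D - 2))
    particles[:, -2:] += rng.normal(0, motion_std, (N, 2))
    # Update: weight by how well each particle's object position explains
    # the fused observation z_obj (e.g., triangulated from robot cameras).
    err = np.linalg.norm(particles[:, -2:] - z_obj, axis=1)
    weights *= np.exp(-0.5 * (err / obs_std) ** 2)
    weights /= weights.sum()
    # Systematic resampling keeps N fixed regardless of R.
    idx = np.searchsorted(np.cumsum(weights), (rng.random() + np.arange(N)) / N)
    return particles[idx], np.full(N, 1.0 / N)

rng = np.random.default_rng(2)
R, N = 4, 500
p = rng.normal(0, 1, (N, R * 2 + 2))
w = np.full(N, 1.0 / N)
p, w = joint_state_pf_step(p, w, np.zeros(R * 2), np.array([0.5, -0.3]), rng)
print(p.shape, w.sum())
```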

ps

Published Version link (url) DOI [BibTex]



Generalized exploration in policy search

van Hoof, H., Tanneberg, D., Peters, J.

Machine Learning, 106(9-10):1705-1724, (Editors: Kurt Driessens, Dragi Kocev, Marko Robnik‐Sikonja, and Myra Spiliopoulou), October 2017, Special Issue of the ECML PKDD 2017 Journal Track (article)

ei

DOI Project Page [BibTex]



Probabilistic Prioritization of Movement Primitives

Paraschos, A., Lioutikov, R., Peters, J., Neumann, G.

Proceedings of the International Conference on Intelligent Robot Systems, and IEEE Robotics and Automation Letters (RA-L), 2(4):2294-2301, October 2017 (article)

ei

link (url) DOI [BibTex]



Learning Movement Primitive Libraries through Probabilistic Segmentation

Lioutikov, R., Neumann, G., Maeda, G., Peters, J.

International Journal of Robotics Research, 36(8):879-894, July 2017 (article)

ei

DOI Project Page [BibTex]



Guiding Trajectory Optimization by Demonstrated Distributions

Osa, T., Ghalamzan E., A. M., Stolkin, R., Lioutikov, R., Peters, J., Neumann, G.

IEEE Robotics and Automation Letters, 2(2):819-826, April 2017 (article)

ei

DOI [BibTex]



Whole-body multi-contact motion in humans and humanoids: Advances of the CoDyCo European project

Padois, V., Ivaldi, S., Babic, J., Mistry, M., Peters, J., Nori, F.

Robotics and Autonomous Systems, 90, pages: 97-117, April 2017, Special Issue on New Research Frontiers for Intelligent Autonomous Systems (article)

ei

DOI Project Page [BibTex]



Probabilistic Movement Primitives for Coordination of Multiple Human-Robot Collaborative Tasks

Maeda, G., Neumann, G., Ewerton, M., Lioutikov, R., Kroemer, O., Peters, J.

Autonomous Robots, 41(3):593-612, March 2017 (article)

ei

DOI Project Page [BibTex]



Bioinspired tactile sensor for surface roughness discrimination

Yi, Z., Zhang, Y., Peters, J.

Sensors and Actuators A: Physical, 255, pages: 46-53, March 2017 (article)

ei

DOI Project Page [BibTex]



Model-based Contextual Policy Search for Data-Efficient Generalization of Robot Skills

Kupcsik, A., Deisenroth, M., Peters, J., Ai Poh, L., Vadakkepat, V., Neumann, G.

Artificial Intelligence, 247, pages: 415-439, 2017, Special Issue on AI and Robotics (article)

ei

link (url) DOI Project Page [BibTex]



Anticipatory Action Selection for Human-Robot Table Tennis

Wang, Z., Boularias, A., Mülling, K., Schölkopf, B., Peters, J.

Artificial Intelligence, 247, pages: 399-414, 2017, Special Issue on AI and Robotics (article)

Abstract
Anticipation can enhance the capability of a robot in its interaction with humans, where the robot predicts the humans' intention for selecting its own action. We present a novel framework of anticipatory action selection for human-robot interaction, which is capable of handling nonlinear and stochastic human behaviors such as table tennis strokes and allows the robot to choose the optimal action based on prediction of the human partner's intention with uncertainty. The presented framework is generic and can be used in many human-robot interaction scenarios, for example, in navigation and human-robot co-manipulation. In this article, we conduct a case study on human-robot table tennis. Due to the limited amount of time for executing hitting movements, a robot usually needs to initiate its hitting movement before the opponent hits the ball, which requires the robot to be anticipatory based on visual observation of the opponent's movement. Previous work on Intention-Driven Dynamics Models (IDDM) allowed the robot to predict the intended target of the opponent. In this article, we address the problem of action selection and optimal timing for initiating a chosen action by formulating the anticipatory action selection as a Partially Observable Markov Decision Process (POMDP), where the transition and observation are modeled by the IDDM framework. We present two approaches to anticipatory action selection based on the POMDP formulation, i.e., a model-free policy learning method based on Least-Squares Policy Iteration (LSPI) that employs the IDDM for belief updates, and a model-based Monte-Carlo Planning (MCP) method, which benefits from the transition and observation model by the IDDM. Experimental results using real data in a simulated environment show the importance of anticipatory action selection, and that POMDPs are suitable to formulate the anticipatory action selection problem by taking into account the uncertainties in prediction. We also show that existing algorithms for POMDPs, such as LSPI and MCP, can be applied to substantially improve the robot's performance in its interaction with humans.
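As a toy illustration of the anticipatory structure (a belief over the opponent's intention, updated from observations, with a timed decision of when to commit to a hitting movement), consider the sketch below. It uses a simple Gaussian cue model and a threshold policy as stand-ins for the paper's IDDM belief updates and its LSPI/MCP action selection.

```python
import numpy as np

# Discretized opponent targets and a toy observation model: each step we
# observe a noisy cue correlated with the true target. All numbers are
# illustrative, not fit to real table-tennis data.
targets = np.array([-0.5, 0.0, 0.5])           # intended ball landing x

def belief_update(belief, cue, obs_std=0.3):
    """Bayesian update of P(target) given a scalar cue (toy IDDM stand-in)."""
    lik = np.exp(-0.5 * ((cue - targets) / obs_std) ** 2)
    post = belief * lik
    return post / post.sum()

def select_action(belief, t, deadline=10):
    """Initiate the hit at the most likely target once confident enough,
    or when the deadline forces commitment (toy threshold policy)."""
    if belief.max() > 0.8 or t >= deadline - 1:
        return ('hit', targets[np.argmax(belief)])
    return ('wait', None)

rng = np.random.default_rng(3)
true_target, belief = 0.5, np.full(3, 1 / 3)
for t in range(10):
    cue = true_target + rng.normal(0, 0.3)
    belief = belief_update(belief, cue)
    action, aim = select_action(belief, t)
    if action == 'hit':
        print(f"t={t}: hit aiming at {aim:+.1f}, belief={np.round(belief, 2)}")
        break
```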

am ei

DOI Project Page [BibTex]



easyGWAS: A Cloud-based Platform for Comparing the Results of Genome-wide Association Studies

Grimm, D., Roqueiro, D., Salome, P., Kleeberger, S., Greshake, B., Zhu, W., Liu, C., Lippert, C., Stegle, O., Schölkopf, B., Weigel, D., Borgwardt, K.

The Plant Cell, 29(1):5-19, 2017 (article)

ei

link (url) DOI [BibTex]



A Novel Unsupervised Segmentation Approach Quantifies Tumor Tissue Populations Using Multiparametric MRI: First Results with Histological Validation

Katiyar, P., Divine, M. R., Kohlhofer, U., Quintanilla-Martinez, L., Schölkopf, B., Pichler, B. J., Disselhorst, J. A.

Molecular Imaging and Biology, 19(3):391-397, 2017 (article)

ei

DOI [BibTex]



Early Stopping Without a Validation Set

Mahsereci, M., Balles, L., Lassner, C., Hennig, P.

arXiv preprint arXiv:1703.09580, 2017 (article)

Abstract
Early stopping is a widely used technique to prevent poor generalization performance when training an over-expressive model by means of gradient-based optimization. To find a good point to halt the optimizer, a common practice is to split the dataset into a training and a smaller validation set to obtain an ongoing estimate of the generalization performance. In this paper we propose a novel early stopping criterion which is based on fast-to-compute, local statistics of the computed gradients and entirely removes the need for a held-out validation set. Our experiments show that this is a viable approach in the setting of least-squares and logistic regression as well as neural networks.
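A sketch in the spirit of the proposed criterion: halt when the mini-batch gradient is statistically indistinguishable from zero given the sampling variance estimated from per-example gradients. The statistic below is a plausible instantiation of that idea, not guaranteed to match the paper's exact published formula.

```python
import numpy as np

def eb_style_statistic(per_sample_grads):
    """Gradient-noise statistic in the spirit of the paper's criterion.

    per_sample_grads: (B, D) gradients of the B individual batch examples.
    Returns a scalar that is strongly negative while the mean gradient
    still carries signal, and hovers near zero once it is indistinguishable
    from its own sampling noise; stopping when it turns positive is one
    plausible rule (an assumption here, not the paper's verified formula).
    """
    B = per_sample_grads.shape[0]
    g = per_sample_grads.mean(axis=0)                 # mini-batch gradient
    var = per_sample_grads.var(axis=0, ddof=1) / B    # variance of that mean
    return np.mean(1.0 - g ** 2 / (var + 1e-12))

rng = np.random.default_rng(4)
early = rng.normal(1.0, 1.0, size=(64, 10))   # clear signal: keep training
late = rng.normal(0.0, 1.0, size=(64, 10))    # pure noise: time to stop
print(eb_style_statistic(early))   # strongly negative
print(eb_style_statistic(late))    # near zero
```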

ps pn

link (url) Project Page Project Page [BibTex]


Minimax Estimation of Kernel Mean Embeddings

Tolstikhin, I., Sriperumbudur, B., Muandet, K.

Journal of Machine Learning Research, 18(86):1-47, 2017 (article)

ei

link (url) Project Page [BibTex]


Kernel Mean Embedding of Distributions: A Review and Beyond

Muandet, K., Fukumizu, K., Sriperumbudur, B., Schölkopf, B.

Foundations and Trends in Machine Learning, 10(1-2):1-141, 2017 (article)

ei

DOI Project Page [BibTex]



Prediction of intention during interaction with iCub with Probabilistic Movement Primitives

Dermy, O., Paraschos, A., Ewerton, M., Charpillet, F., Peters, J., Ivaldi, S.

Frontiers in Robotics and AI, 4, pages: 45, 2017 (article)

ei

DOI Project Page [BibTex]



Manifold-based multi-objective policy search with sample reuse

Parisi, S., Pirotta, M., Peters, J.

Neurocomputing, 263, pages: 3-14, (Editors: Madalina Drugan, Marco Wiering, Peter Vamplew, and Madhu Chetty), 2017, Special Issue on Multi-Objective Reinforcement Learning (article)

ei

DOI Project Page [BibTex]



Spectral Clustering predicts tumor tissue heterogeneity using dynamic 18F-FDG PET: a complement to the standard compartmental modeling approach

Katiyar, P., Divine, M. R., Kohlhofer, U., Quintanilla-Martinez, L., Schölkopf, B., Pichler, B. J., Disselhorst, J. A.

Journal of Nuclear Medicine, 58(4):651-657, 2017 (article)

ei

link (url) DOI [BibTex]



Electroencephalographic identifiers of motor adaptation learning

Ozdenizci, O., Yalcin, M., Erdogan, A., Patoglu, V., Grosse-Wentrup, M., Cetin, M.

Journal of Neural Engineering, 14(4):046027, 2017 (article)

ei

link (url) [BibTex]



Detecting distortions of peripherally presented letter stimuli under crowded conditions

Wallis, T. S. A., Tobias, S., Bethge, M., Wichmann, F. A.

Attention, Perception, & Psychophysics, 79(3):850-862, 2017 (article)

ei

DOI Project Page [BibTex]



Temporal evolution of the central fixation bias in scene viewing

Rothkegel, L. O. M., Trukenbrod, H. A., Schütt, H. H., Wichmann, F. A., Engbert, R.

Journal of Vision, 17(13):3, 2017 (article)

ei

DOI Project Page [BibTex]



BundleMAP: Anatomically Localized Classification, Regression, and Hypothesis Testing in Diffusion MRI

Khatami, M., Schmidt-Wilcke, T., Sundgren, P. C., Abbasloo, A., Schölkopf, B., Schultz, T.

Pattern Recognition, 63, pages: 593-600, 2017 (article)

ei

DOI [BibTex]



Data-Driven Physics for Human Soft Tissue Animation

Kim, M., Pons-Moll, G., Pujades, S., Bang, S., Kim, J., Black, M. J., Lee, S.

ACM Transactions on Graphics, (Proc. SIGGRAPH), 36(4):54:1-54:12, 2017 (article)

Abstract
Data-driven models of human poses and soft-tissue deformations can produce very realistic results, but they only model the visible surface of the human body and cannot create skin deformation due to interactions with the environment. Physical simulations can generalize to external forces, but their parameters are difficult to control. In this paper, we present a layered volumetric human body model learned from data. Our model is composed of a data-driven inner layer and a physics-based external layer. The inner layer is driven with a volumetric statistical body model (VSMPL). The soft tissue layer consists of a tetrahedral mesh that is driven using the finite element method (FEM). Model parameters, namely the segmentation of the body into layers and the soft tissue elasticity, are learned directly from 4D registrations of humans exhibiting soft tissue deformations. The learned two-layer model is a realistic full-body avatar that generalizes to novel motions and external forces. Experiments show that the resulting avatars produce realistic results on held-out sequences and react to external forces. Moreover, the model supports the retargeting of physical properties from one avatar to another when they share the same topology.

ps

video paper link (url) Project Page [BibTex]



A parametric texture model based on deep convolutional features closely matches texture appearance for humans

Wallis, T. S. A., Funke, C. M., Ecker, A. S., Gatys, L. A., Wichmann, F. A., Bethge, M.

Journal of Vision, 17(12), 2017 (article)

ei

DOI Project Page [BibTex]



Model Selection for Gaussian Mixture Models

Huang, T., Peng, H., Zhang, K.

Statistica Sinica, 27(1):147-169, 2017 (article)

ei

link (url) [BibTex]



Sparse Inertial Poser: Automatic 3D Human Pose Estimation from Sparse IMUs

(Best Paper, Eurographics 2017)

Marcard, T. V., Rosenhahn, B., Black, M., Pons-Moll, G.

Computer Graphics Forum 36(2), Proceedings of the 38th Annual Conference of the European Association for Computer Graphics (Eurographics), pages: 349-360, 2017 (article)

Abstract
We address the problem of making human motion capture in the wild more practical by using a small set of inertial sensors attached to the body. Since the problem is heavily under-constrained, previous methods either use a large number of sensors, which is intrusive, or they require additional video input. We take a different approach and constrain the problem by: (i) making use of a realistic statistical body model that includes anthropometric constraints and (ii) using a joint optimization framework to fit the model to orientation and acceleration measurements over multiple frames. The resulting tracker Sparse Inertial Poser (SIP) enables motion capture using only 6 sensors (attached to the wrists, lower legs, back and head) and works for arbitrary human motions. Experiments on the recently released TNT15 dataset show that, using the same number of sensors, SIP achieves higher accuracy than the dataset baseline without using any video data. We further demonstrate the effectiveness of SIP on newly recorded challenging motions in outdoor scenarios such as climbing or jumping over a wall.
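The joint-optimization idea, fitting pose parameters over many frames at once to orientation measurements plus finite-difference accelerations, can be miniaturized to a planar two-link chain. The sketch below is an illustrative analogue, not SIP itself: real SIP fits the SMPL body model to 6 IMUs, and all weights and names here are stand-ins.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy planar analogue of the SIP objective: recover per-frame joint angles
# of a 2-link chain from orientation sensors on both links plus the distal
# sensor's measured acceleration, solved jointly over all frames.
L1, L2, DT = 0.5, 0.4, 1.0 / 60.0

def tip_pos(th):                        # th: (F, 2) joint angles per frame
    a, b = th[:, 0], th[:, 0] + th[:, 1]
    return np.stack([L1 * np.cos(a) + L2 * np.cos(b),
                     L1 * np.sin(a) + L2 * np.sin(b)], axis=1)

def residuals(flat, ori_meas, acc_meas, w_acc=1e-3):
    th = flat.reshape(-1, 2)
    # Orientation term: each sensor should match its segment's world angle.
    r_ori = np.concatenate([th[:, 0] - ori_meas[:, 0],
                            th[:, 0] + th[:, 1] - ori_meas[:, 1]])
    # Acceleration term: finite differences of the predicted sensor position
    # should match the measured acceleration (couples frames together).
    p = tip_pos(th)
    acc = (p[2:] - 2 * p[1:-1] + p[:-2]) / DT ** 2
    return np.concatenate([r_ori, w_acc * (acc - acc_meas).ravel()])

rng = np.random.default_rng(5)
F = 30
true = np.cumsum(rng.normal(0, 0.05, (F, 2)), axis=0)
ori = np.stack([true[:, 0], true.sum(axis=1)], axis=1) + rng.normal(0, 0.01, (F, 2))
p = tip_pos(true)
acc = (p[2:] - 2 * p[1:-1] + p[:-2]) / DT ** 2
sol = least_squares(residuals, np.zeros(F * 2), args=(ori, acc))
err = np.abs(sol.x.reshape(F, 2) - true).mean()
print(f"mean joint-angle error: {err:.4f} rad")   # roughly the sensor noise level
```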

ps

video pdf Project Page [BibTex]



Efficient 2D and 3D Facade Segmentation using Auto-Context

Gadde, R., Jampani, V., Marlet, R., Gehler, P.

IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017 (article)

Abstract
This paper introduces a fast and efficient segmentation technique for 2D images and 3D point clouds of building facades. Facades of buildings are highly structured, and consequently most methods that have been proposed for this problem aim to make use of this strong prior information. Contrary to most prior work, we describe a system that is almost domain independent and consists of standard segmentation methods. We train a sequence of boosted decision trees using auto-context features. This is learned using stacked generalization. We find that this technique performs better than, or comparably to, all previously published methods and present empirical results on all available 2D and 3D facade benchmark datasets. The proposed method is simple to implement, easy to extend, and very efficient at test-time inference.
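The core training loop, boosted trees stacked so that each stage sees the raw features plus the previous stage's out-of-fold predictions (stacked generalization), looks roughly like this. It is a generic sketch on synthetic data; the paper's auto-context features additionally pool predictions over spatial neighborhoods.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import cross_val_predict

# Minimal auto-context stack: each stage trains on the raw features plus
# the previous stage's out-of-fold class probabilities, so no stage sees
# predictions made by a model trained on the same rows.
X, y = make_classification(n_samples=1000, n_features=20, n_informative=5,
                           n_classes=3, random_state=0)
feats = X
for stage in range(3):
    clf = GradientBoostingClassifier(random_state=0)
    proba = cross_val_predict(clf, feats, y, cv=5, method='predict_proba')
    print(f"stage {stage} out-of-fold accuracy:",
          accuracy_score(y, proba.argmax(axis=1)))
    feats = np.hstack([X, proba])   # auto-context features for the next stage
```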

ps

arXiv Project Page [BibTex]



ClothCap: Seamless 4D Clothing Capture and Retargeting

Pons-Moll, G., Pujades, S., Hu, S., Black, M.

ACM Transactions on Graphics, (Proc. SIGGRAPH), 36(4):73:1-73:15, ACM, New York, NY, USA, 2017, Two first authors contributed equally (article)

Abstract
Designing and simulating realistic clothing is challenging and, while several methods have addressed the capture of clothing from 3D scans, previous methods have been limited to single garments and simple motions, lack detail, or require specialized texture patterns. Here we address the problem of capturing regular clothing on fully dressed people in motion. People typically wear multiple pieces of clothing at a time. To estimate the shape of such clothing, track it over time, and render it believably, each garment must be segmented from the others and the body. Our ClothCap approach uses a new multi-part 3D model of clothed bodies, automatically segments each piece of clothing, estimates the naked body shape and pose under the clothing, and tracks the 3D deformations of the clothing over time. We estimate the garments and their motion from 4D scans; that is, high-resolution 3D scans of the subject in motion at 60 fps. The model allows us to capture a clothed person in motion, extract their clothing, and retarget the clothing to new body shapes. ClothCap provides a step towards virtual try-on with a technology for capturing, modeling, and analyzing clothing in motion.

ps

video project_page paper link (url) DOI Project Page Project Page [BibTex]



An image-computable psychophysical spatial vision model

Schütt, H. H., Wichmann, F. A.

Journal of Vision, 17(12), 2017 (article)

ei

DOI Project Page [BibTex]



Methods and measurements to compare men against machines

Wichmann, F. A., Janssen, D. H. J., Geirhos, R., Aguilar, G., Schütt, H. H., Maertens, M., Bethge, M.

Electronic Imaging, pages: 36-45(10), 2017 (article)

ei

DOI [BibTex]



A Comparison of Autoregressive Hidden Markov Models for Multimodal Manipulations With Variable Masses

Kroemer, O., Peters, J.

IEEE Robotics and Automation Letters, 2(2):1101-1108, 2017 (article)

ei

DOI [BibTex]



Phase Estimation for Fast Action Recognition and Trajectory Generation in Human-Robot Collaboration

Maeda, G., Ewerton, M., Neumann, G., Lioutikov, R., Peters, J.

International Journal of Robotics Research, 36(13-14):1579-1594, 2017, Special Issue on the Seventeenth International Symposium on Robotics Research (article)

ei

DOI Project Page [BibTex]



A Phase-coded Aperture Camera with Programmable Optics

Chen, J., Hirsch, M., Heintzmann, R., Eberhardt, B., Lensch, H. P. A.

Electronic Imaging, 2017(17):70-75, 2017 (article)

ei

DOI [BibTex]



On Maximum Entropy and Inference

Gresele, L., Marsili, M.

Entropy, 19(12):article no. 642, 2017 (article)

ei

link (url) [BibTex]



Towards Engagement Models that Consider Individual Factors in HRI: On the Relation of Extroversion and Negative Attitude Towards Robots to Gaze and Speech During a Human-Robot Assembly Task

Ivaldi, S., Lefort, S., Peters, J., Chetouani, M., Provasi, J., Zibetti, E.

International Journal of Social Robotics, 9(1):63-86, 2017 (article)

ei

DOI [BibTex]



Non-parametric Policy Search with Limited Information Loss

van Hoof, H., Neumann, G., Peters, J.

Journal of Machine Learning Research, 18(73):1-46, 2017 (article)

ei

link (url) Project Page [BibTex]



Stability of Controllers for Gaussian Process Dynamics

Vinogradska, J., Bischoff, B., Nguyen-Tuong, D., Peters, J.

Journal of Machine Learning Research, 18(100):1-37, 2017 (article)

ei

link (url) Project Page [BibTex]



SUV-quantification of physiological lung tissue in an integrated PET/MR-system: Impact of lung density and bone tissue

Seith, F., Schmidt, H., Gatidis, S., Bezrukov, I., Schraml, C., Pfannenberg, C., la Fougère, C., Nikolaou, K., Schwenzer, N.

PLOS ONE, 12(5):1-13, 2017 (article)

ei

DOI [BibTex]


2013


Branch&Rank for Efficient Object Detection

Lehmann, A., Gehler, P., Van Gool, L.

International Journal of Computer Vision, Springer, December 2013 (article)

Abstract
Ranking hypothesis sets is a powerful concept for efficient object detection. In this work, we propose a branch&rank scheme that detects objects with often less than 100 ranking operations. This efficiency enables the use of strong and also costly classifiers like non-linear SVMs with RBF kernels. We thereby relieve an inherent limitation of branch&bound methods as bounds are often not tight enough to be effective in practice. Our approach features three key components: a ranking function that operates on sets of hypotheses and a grouping of these into different tasks. Detection efficiency results from adaptively sub-dividing the object search space into decreasingly smaller sets. This is inherited from branch&bound, while the ranking function supersedes a tight bound which is often unavailable (except for rather limited function classes). The grouping makes the system effective: it separates image classification from object recognition, yet combines them in a single formulation, phrased as a structured SVM problem. A novel aspect of branch&rank is that a better ranking function is expected to decrease the number of classifier calls during detection. We use the VOC’07 dataset to demonstrate the algorithmic properties of branch&rank.
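The control flow of branch&rank, a best-first search over hypothesis sets ordered by a learned ranking function, can be sketched as below. The ranking function here is a toy oracle standing in for the learned structured-SVM ranker; in the real system the score comes from image features of the hypothesis set.

```python
import heapq

def branch_and_rank(root, rank, split, is_leaf, max_evals=100):
    """Best-first search over hypothesis sets driven by a ranking function.

    rank(s) -> score, higher = more promising; split(s) -> child sets;
    is_leaf(s) -> True when s is a single hypothesis. Generic sketch.
    """
    heap, evals, tiebreak = [(-rank(root), 0, root)], 1, 1
    while heap and evals < max_evals:
        _, _, s = heapq.heappop(heap)
        if is_leaf(s):
            return s, evals
        for child in split(s):
            heapq.heappush(heap, (-rank(child), tiebreak, child))
            tiebreak += 1
            evals += 1
    return None, evals

# Toy instance: find the 1x1 window containing a target point. The rank
# is an oracle (does the set still contain the target, biased to small
# sets); a learned ranker would score image evidence instead.
target = (37, 81)
def rank(s):
    (x0, x1), (y0, y1) = s
    inside = x0 <= target[0] < x1 and y0 <= target[1] < y1
    return (2.0 if inside else 0.0) - 1e-3 * ((x1 - x0) + (y1 - y0))
def split(s):
    (x0, x1), (y0, y1) = s
    if x1 - x0 >= y1 - y0:
        m = (x0 + x1) // 2
        return [((x0, m), (y0, y1)), ((m, x1), (y0, y1))]
    m = (y0 + y1) // 2
    return [((x0, x1), (y0, m)), ((x0, x1), (m, y1))]
def is_leaf(s):
    return s[0][1] - s[0][0] == 1 and s[1][1] - s[1][0] == 1

best, evals = branch_and_rank(((0, 128), (0, 128)), rank, split, is_leaf)
print(best, 'found with', evals, 'ranking evaluations')
```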

ps

pdf link (url) [BibTex]



Extracting Postural Synergies for Robotic Grasping

Romero, J., Feix, T., Ek, C., Kjellstrom, H., Kragic, D.

Robotics, IEEE Transactions on, 29(6):1342-1352, December 2013 (article)

ps

[BibTex]



Markov Random Field Modeling, Inference & Learning in Computer Vision & Image Understanding: A Survey

Wang, C., Komodakis, N., Paragios, N.

Computer Vision and Image Understanding (CVIU), 117(11):1610-1627, November 2013 (article)

Abstract
In this paper, we present a comprehensive survey of Markov Random Fields (MRFs) in computer vision and image understanding, with respect to the modeling, the inference and the learning. While MRFs were introduced into the computer vision field about two decades ago, they started to become a ubiquitous tool for solving visual perception problems around the turn of the millennium following the emergence of efficient inference methods. During the past decade, a variety of MRF models as well as inference and learning methods have been developed for addressing numerous low, mid and high-level vision problems. While most of the literature concerns pairwise MRFs, in recent years we have also witnessed significant progress in higher-order MRFs, which substantially enhances the expressiveness of graph-based models and expands the domain of solvable problems. This survey provides a compact and informative summary of the major literature in this research topic.
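For readers new to the topic, the canonical vision MRF pairs unary data terms with Potts smoothness on a grid, E(x) = Σ_i U_i(x_i) + λ Σ_(i,j) [x_i ≠ x_j], and one of the simplest (local) inference methods the survey covers is Iterated Conditional Modes (ICM). A toy binary-image denoising sketch:

```python
import numpy as np

# Pairwise MRF with unary data terms and Potts smoothness on a 4-connected
# grid, minimized by ICM: sweep the pixels, greedily picking the label that
# minimizes the local energy. Parameters are illustrative.
rng = np.random.default_rng(6)
clean = np.zeros((32, 32), dtype=int); clean[8:24, 8:24] = 1
noisy = np.where(rng.random(clean.shape) < 0.15, 1 - clean, clean)

def icm(obs, lam=1.5, unary_w=2.0, iters=10):
    x = obs.copy()
    H, W = x.shape
    for _ in range(iters):
        for i in range(H):
            for j in range(W):
                costs = []
                for label in (0, 1):
                    u = unary_w * (label != obs[i, j])       # data term
                    p = sum(lam * (label != x[a, b])          # Potts term
                            for a, b in ((i-1, j), (i+1, j), (i, j-1), (i, j+1))
                            if 0 <= a < H and 0 <= b < W)
                    costs.append(u + p)
                x[i, j] = int(np.argmin(costs))
    return x

print('noisy errors:', (noisy != clean).sum(),
      '-> ICM errors:', (icm(noisy) != clean).sum())
```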

ps

Publishers site pdf [BibTex]



Vision meets Robotics: The KITTI Dataset

Geiger, A., Lenz, P., Stiller, C., Urtasun, R.

International Journal of Robotics Research, 32(11):1231-1237, Sage Publishing, September 2013 (article)

Abstract
We present a novel dataset captured from a VW station wagon for use in mobile robotics and autonomous driving research. In total, we recorded 6 hours of traffic scenarios at 10-100 Hz using a variety of sensor modalities such as high-resolution color and grayscale stereo cameras, a Velodyne 3D laser scanner and a high-precision GPS/IMU inertial navigation system. The scenarios are diverse, capturing real-world traffic situations and range from freeways over rural areas to inner-city scenes with many static and dynamic objects. Our data is calibrated, synchronized and timestamped, and we provide the rectified and raw image sequences. Our dataset also contains object labels in the form of 3D tracklets and we provide online benchmarks for stereo, optical flow, object detection and other tasks. This paper describes our recording platform, the data format and the utilities that we provide.
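As a practical note, the Velodyne scans in the raw dataset are stored as flat float32 binaries of (x, y, z, reflectance) tuples, per the dataset's published format, so loading one is a one-liner; the path below is a placeholder for wherever the raw recordings were unpacked.

```python
import numpy as np

def load_velodyne_scan(path):
    """Load one KITTI Velodyne scan: a flat float32 binary whose entries
    are consecutive (x, y, z, reflectance) tuples."""
    return np.fromfile(path, dtype=np.float32).reshape(-1, 4)

# Example (placeholder path under an unpacked raw-data drive):
# scan = load_velodyne_scan(
#     '2011_09_26/2011_09_26_drive_0001_sync/velodyne_points/data/0000000000.bin')
# print(scan.shape)   # (num_points, 4)
```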

avg ps

pdf DOI [BibTex]



Correlation of Simultaneously Acquired Diffusion-Weighted Imaging and 2-Deoxy-[18F] fluoro-2-D-glucose Positron Emission Tomography of Pulmonary Lesions in a Dedicated Whole-Body Magnetic Resonance/Positron Emission Tomography System

Schmidt, H., Brendle, C., Schraml, C., Martirosian, P., Bezrukov, I., Hetzel, J., Müller, M., Sauter, A., Claussen, C., Pfannenberg, C., Schwenzer, N.

Investigative Radiology, 48(5):247-255, May 2013 (article)

ei

Web [BibTex]
