

2016


Non-parametric Models for Structured Data and Applications to Human Bodies and Natural Scenes

Lehrmann, A.

ETH Zurich, July 2016 (phdthesis)

Abstract
The purpose of this thesis is the study of non-parametric models for structured data and their fields of application in computer vision. We aim at the development of context-sensitive architectures which are both expressive and efficient. Our focus is on directed graphical models, in particular Bayesian networks, where we combine the flexibility of non-parametric local distributions with the efficiency of a global topology with bounded treewidth. A bound on the treewidth is obtained by either constraining the maximum indegree of the underlying graph structure or by introducing determinism. The non-parametric distributions in the nodes of the graph are given by decision trees or kernel density estimators. The information flow implied by specific network topologies, especially the resultant (conditional) independencies, allows for a natural integration and control of contextual information. We distinguish between three different types of context: static, dynamic, and semantic. In four different approaches we propose models which exhibit varying combinations of these contextual properties and allow modeling of structured data in space, time, and hierarchies derived thereof. The generative character of the presented models enables a direct synthesis of plausible hypotheses. Extensive experiments validate the developed models in two application scenarios which are of particular interest in computer vision: human bodies and natural scenes. In the practical sections of this work we discuss both areas from different angles and show applications of our models to human pose, motion, and segmentation as well as object categorization and localization. Here, we benefit from the availability of modern datasets of unprecedented size and diversity. Comparisons to traditional approaches and state-of-the-art research on the basis of well-established evaluation criteria allow the objective assessment of our contributions.
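
For background, a Bayesian network over variables x_1, ..., x_n (generic textbook notation, not the notation of the thesis) factorizes the joint distribution according to the parent sets of the directed graph,

    p(x_1, \dots, x_n) = \prod_{i=1}^{n} p\bigl(x_i \mid x_{\mathrm{pa}(i)}\bigr),

where pa(i) denotes the parents of node i. The thesis replaces the local conditionals p(x_i | x_pa(i)) with non-parametric estimators (decision trees or kernel density estimators) and, as stated in the abstract, keeps the global topology tractable by bounding the treewidth via a maximum-indegree constraint or determinism.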

ps

pdf [BibTex]


Supplemental material for ’Communication Rate Analysis for Event-based State Estimation’

Ebner, S., Trimpe, S.

Max Planck Institute for Intelligent Systems, January 2016 (techreport)

am ics

PDF [BibTex]


Deep Learning for Diabetic Retinopathy Diagnostics

Balles, L.

Heidelberg University, 2016, in cooperation with Bosch Corporate Research (mastersthesis)

[BibTex]


Statische und dynamische Magnetisierungseigenschaften nanoskaliger Überstrukturen

Gräfe, J.

Universität Stuttgart, Stuttgart (and Cuvillier Verlag, Göttingen), 2016 (phdthesis)

mms

[BibTex]


Gepinnte Bahnmomente in magnetischen Heterostrukturen

Audehm, P.

Universität Stuttgart, Stuttgart (and Cuvillier Verlag, Göttingen), 2016 (phdthesis)

mms

[BibTex]


Austauschgekoppelte Moden in magnetischen Vortexstrukturen

Dieterle, G.

Universität Stuttgart, Stuttgart, 2016 (phdthesis)

mms

[BibTex]


Density matrix calculations for the ultrafast demagnetization after femtosecond laser pulses

Weng, Weikai

Universität Stuttgart, Stuttgart, 2016 (mastersthesis)

mms

[BibTex]


Helium and Hydrogen Isotope Adsorption and Separation in Metal-Organic Frameworks

Zaiser, Ingrid

Universität Stuttgart, Stuttgart (and Cuvillier Verlag, Göttingen), 2016 (phdthesis)

mms

[BibTex]

2013


Determination of an Analysis Procedure for FEM-Based Fatigue Calculations

Serhat, G.

Technical University of Munich, December 2013 (mastersthesis)

hi

[BibTex]


Camera-specific Image Denoising

Schober, M.

Eberhard Karls Universität Tübingen, Germany, October 2013 (diplomathesis)

ei pn

PDF [BibTex]


Puppet Flow

Zuffi, S., Black, M. J.

(7), Max Planck Institute for Intelligent Systems, October 2013 (techreport)

Abstract
We introduce Puppet Flow (PF), a layered model describing the optical flow of a person in a video sequence. We consider video frames composed of two layers: a foreground layer corresponding to a person, and the background. We model the background as an affine flow field. The foreground layer, being a moving person, requires reasoning about the articulated nature of the human body. We thus represent the foreground layer with the Deformable Structures model (DS), a parametrized 2D part-based human body representation. We call the motion field defined through articulated motion and deformation of the DS model a Puppet Flow. By exploiting the DS representation, Puppet Flow is a parametrized optical flow field whose parameters are the person's pose, gender and body shape.
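
Schematically, a two-layer flow of the kind described above can be written as (symbols illustrative, not the report's notation)

    u(\mathbf{x}) \;=\; m(\mathbf{x})\, u_{\mathrm{DS}}(\mathbf{x}; \theta) \;+\; \bigl(1 - m(\mathbf{x})\bigr)\, u_{\mathrm{aff}}(\mathbf{x}), \qquad u_{\mathrm{aff}}(\mathbf{x}) = A\mathbf{x} + \mathbf{t} - \mathbf{x},

where m is the support of the person layer, u_DS is the flow induced by the articulated Deformable Structures model with parameters \theta (pose, gender, body shape), and A, \mathbf{t} parameterize the affine background flow.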

ps

pdf Project Page [BibTex]


D2.1.4 RoCKIn@Work - Innovation in Mobile Industrial Manipulation Competition Design, Rule Book, and Scenario Construction

Ahmad, A., Awaad, I., Amigoni, F., Berghofer, J., Bischoff, R., Bonarini, A., Dwiputra, R., Hegger, F., Hochgeschwender, N., Iocchi, L., Kraetzschmar, G., Lima, P., Matteucci, M., Nardi, D., Schneider, S.

(FP7-ICT-601012 Revision 0.7), RoCKIn - Robot Competitions Kick Innovation in Cognitive Systems and Robotics, September 2013 (techreport)

Abstract
RoCKIn is an EU-funded project aiming to foster scientific progress and innovation in cognitive systems and robotics through the design and implementation of competitions. An additional objective of RoCKIn is to increase public awareness of the current state of the art in robotics in Europe and to demonstrate the innovation potential of robotics applications for solving societal challenges and improving the competitiveness of Europe in the global markets. In order to achieve these objectives, RoCKIn develops two competitions, one for domestic service robots (RoCKIn@Home) and one for industrial robots in factories (RoCKIn@Work). These competitions are designed around challenges that are based on easy-to-communicate and convincing user stories, which catch the interest of both the general public and the scientific community. The latter is in particular interested in solving open scientific challenges and in thoroughly assessing, comparing, and evaluating the developed approaches against competing ones. To allow this to happen, the competitions are designed to meet the requirements of benchmarking procedures and good experimental methods. The integration of benchmarking technology with the competition concept is one of the main objectives of RoCKIn. This document describes the first version of the RoCKIn@Work competition, which will be held for the first time in 2014. The first chapter of the document gives a brief overview, outlining the purpose and objective of the competition, the methodological approach taken by the RoCKIn project, the user story upon which the competition is based, the structure and organization of the competition, and the commonalities and differences with the RoboCup@Work competition, which served as inspiration for RoCKIn@Work. The second chapter provides details on the user story and analyzes the scientific and technical challenges it poses. Consecutive chapters detail the competition scenario, the competition design, and the organization of the competition. The appendices contain information on a library of functionalities, which we believe are needed, or at least useful, for building competition entries, details on the scenario construction, and a detailed account of the benchmarking infrastructure needed — and provided by RoCKIn.

ps

[BibTex]


D2.1.1 RoCKIn@Home - A Competition for Domestic Service Robots Competition Design, Rule Book, and Scenario Construction

Ahmad, A., Awaad, I., Amigoni, F., Berghofer, J., Bischoff, R., Bonarini, A., Dwiputra, R., Hegger, F., Hochgeschwender, N., Iocchi, L., Kraetzschmar, G., Lima, P., Matteucci, M., Nardi, D., Schneider, S.

(FP7-ICT-601012 Revision 0.7), RoCKIn - Robot Competitions Kick Innovation in Cognitive Systems and Robotics, September 2013 (techreport)

Abstract
RoCKIn is an EU-funded project aiming to foster scientific progress and innovation in cognitive systems and robotics through the design and implementation of competitions. An additional objective of RoCKIn is to increase public awareness of the current state of the art in robotics in Europe and to demonstrate the innovation potential of robotics applications for solving societal challenges and improving the competitiveness of Europe in the global markets. In order to achieve these objectives, RoCKIn develops two competitions, one for domestic service robots (RoCKIn@Home) and one for industrial robots in factories (RoCKIn@Work). These competitions are designed around challenges that are based on easy-to-communicate and convincing user stories, which catch the interest of both the general public and the scientific community. The latter is in particular interested in solving open scientific challenges and in thoroughly assessing, comparing, and evaluating the developed approaches against competing ones. To allow this to happen, the competitions are designed to meet the requirements of benchmarking procedures and good experimental methods. The integration of benchmarking technology with the competition concept is one of the main objectives of RoCKIn. This document describes the first version of the RoCKIn@Home competition, which will be held for the first time in 2014. The first chapter of the document gives a brief overview, outlining the purpose and objective of the competition, the methodological approach taken by the RoCKIn project, the user story upon which the competition is based, the structure and organization of the competition, and the commonalities and differences with the RoboCup@Home competition, which served as inspiration for RoCKIn@Home. The second chapter provides details on the user story and analyzes the scientific and technical challenges it poses. Consecutive chapters detail the competition scenario, the competition design, and the organization of the competition. The appendices contain information on a library of functionalities, which we believe are needed, or at least useful, for building competition entries, details on the scenario construction, and a detailed account of the benchmarking infrastructure needed — and provided by RoCKIn.

ps

[BibTex]


Statistics on Manifolds with Applications to Modeling Shape Deformations

Freifeld, O.

Brown University, August 2013 (phdthesis)

Abstract
Statistical models of non-rigid deformable shape have wide application in many fields, including computer vision, computer graphics, and biometry. We show that shape deformations are well represented through nonlinear manifolds that are also matrix Lie groups. These pattern-theoretic representations lead to several advantages over other alternatives, including a principled measure of shape dissimilarity and a natural way to compose deformations. Moreover, they enable building models using statistics on manifolds. Consequently, such models are superior to those based on Euclidean representations. We demonstrate this by modeling 2D and 3D human body shape. Shape deformations are only one example of manifold-valued data. More generally, in many computer-vision and machine-learning problems, nonlinear manifold representations arise naturally and provide a powerful alternative to Euclidean representations. Statistics is traditionally concerned with data in a Euclidean space, relying on the linear structure and the distances associated with such a space; this renders it inappropriate for nonlinear spaces. Statistics can, however, be generalized to nonlinear manifolds. Moreover, by respecting the underlying geometry, the statistical models result in not only more effective analysis but also consistent synthesis. We go beyond previous work on statistics on manifolds by showing how, even on these curved spaces, problems related to modeling a class from scarce data can be dealt with by leveraging information from related classes residing in different regions of the space. We show the usefulness of our approach with 3D shape deformations. To summarize our main contributions: 1) We define a new 2D articulated model -- more expressive than traditional ones -- of deformable human shape that factors body-shape, pose, and camera variations. Its high realism is obtained from training data generated from a detailed 3D model. 2) We define a new manifold-based representation of 3D shape deformations that yields statistical deformable-template models that are better than the current state-of-the-art. 3) We generalize a transfer learning idea from Euclidean spaces to Riemannian manifolds. This work demonstrates the value of modeling manifold-valued data and their statistics explicitly on the manifold. Specifically, the methods here provide new tools for shape analysis.
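
A standard building block for such statistics on a Riemannian manifold M (a generic definition, not specific to this thesis) is the Fréchet, or Karcher, mean, which replaces the Euclidean sample mean:

    \bar{x} \;=\; \arg\min_{\mu \in M} \sum_{i=1}^{N} d_M(\mu, x_i)^2,

where d_M is the geodesic distance; second-order statistics are then typically computed in the tangent space at \bar{x} via the logarithm map.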

ps

pdf Project Page [BibTex]


D1.1 Specification of General Features of Scenarios and Robots for Benchmarking Through Competitions

Ahmad, A., Awaad, I., Amigoni, F., Berghofer, J., Bischoff, R., Bonarini, A., Dwiputra, R., Fontana, G., Hegger, F., Hochgeschwender, N., Iocchi, L., Kraetzschmar, G., Lima, P., Matteucci, M., Nardi, D., Schiaffonati, V., Schneider, S.

(FP7-ICT-601012 Revision 1.0), RoCKIn - Robot Competitions Kick Innovation in Cognitive Systems and Robotics, July 2013 (techreport)

Abstract
RoCKIn is an EU-funded project aiming to foster scientific progress and innovation in cognitive systems and robotics through the design and implementation of competitions. An additional objective of RoCKIn is to increase public awareness of the current state of the art in robotics and the innovation potential of robotics applications. From these objectives several requirements for the work performed in RoCKIn can be derived: The RoCKIn competitions must start from convincing, easy-to-communicate user stories that catch the attention of relevant stakeholders, the media, and the crowd. The user stories play the role of a mid- to long-term vision for a competition. Preferably, the user stories address economic, societal, or environmental problems. The RoCKIn competitions must pose open scientific challenges of interest to sufficiently many researchers to attract existing and new teams of robotics researchers to participate in the competition. The competitions need to promise some suitable reward, such as recognition in the scientific community, publicity for a team’s work, awards, or prize money, to justify the effort a team puts into the development of a competition entry. The competitions should be designed in such a way that they reward general, scientifically sound solutions to the challenge problems; such general solutions should score better than approaches that work only in narrowly defined contexts and are considered over-engineered. The challenges motivating the RoCKIn competitions must be broken down into suitable intermediate goals that can be reached with limited team effort by the next competition and within the project duration. The RoCKIn competitions must be well-defined and well-designed, with comprehensive rule books and instructions for the participants, in order to guarantee a fair competition. The RoCKIn competitions must integrate competitions with benchmarking in order to provide comprehensive feedback for the teams about the suitability of particular functional modules, their overall architecture, and system integration. This document takes the first steps towards the RoCKIn goals. After outlining our approach, we present several user stories for further discussion within the community. The main objectives of this document are to identify and document relevant scenario features and the tasks and functionalities subject to benchmarking in the competitions.

ps

[BibTex]


SocRob-MSL 2013 Team Description Paper for Middle Sized League

Messias, J., Ahmad, A., Reis, J., Serafim, M., Lima, P.

17th Annual RoboCup International Symposium 2013, July 2013 (techreport)

Abstract
This paper describes the status of the SocRob MSL robotic soccer team as required by the RoboCup 2013 qualification procedures. The team’s latest scientific and technical developments, since its last participation in RoboCup MSL, include further advances in cooperative perception; novel communication methods for distributed robotics; progressive deployment of the ROS middleware; improved localization through feature tracking and Mixture MCL; novel planning methods based on Petri nets and decision-theoretic frameworks; and hardware developments in ball-handling/kicking devices.

ps

link (url) [BibTex]


Learning and Optimization with Submodular Functions

Sankaran, B., Ghazvininejad, M., He, X., Kale, D., Cohen, L.

arXiv, May 2013 (techreport)

Abstract
In many naturally occurring optimization problems one needs to ensure that the definition of the optimization problem lends itself to solutions that are tractable to compute. In cases where exact solutions cannot be computed tractably, it is beneficial to have strong guarantees on the tractable approximate solutions. In order to operate under these criteria, most optimization problems are cast under the umbrella of convexity or submodularity. In this report we study design and optimization over a common class of functions called submodular functions. Set functions, and specifically submodular set functions, characterize a wide variety of naturally occurring optimization problems, and the property of submodularity of set functions has deep theoretical consequences with wide-ranging applications. Informally, the property of submodularity of set functions concerns the intuitive principle of diminishing returns. This property states that adding an element to a smaller set has more value than adding it to a larger set. Common examples of submodular monotone functions are entropies, concave functions of cardinality, and matroid rank functions; non-monotone examples include graph cuts, network flows, and mutual information. In this report we review the formal definition of submodularity; the optimization of submodular functions, both maximization and minimization; and finally discuss some applications in relation to learning and reasoning using submodular functions.
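
For reference, the diminishing-returns property described above reads, in standard notation for a set function f defined on subsets of a ground set V,

    f\bigl(A \cup \{x\}\bigr) - f(A) \;\ge\; f\bigl(B \cup \{x\}\bigr) - f(B) \qquad \text{for all } A \subseteq B \subseteq V,\; x \in V \setminus B,

which is equivalent to f(A) + f(B) \ge f(A \cup B) + f(A \cap B) for all A, B \subseteq V.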

am

arxiv link (url) [BibTex]


Probabilistic Models for 3D Urban Scene Understanding from Movable Platforms

Geiger, A.

Karlsruhe Institute of Technology, April 2013 (phdthesis)

Abstract
Visual 3D scene understanding is an important component in autonomous driving and robot navigation. Intelligent vehicles, for example, often base their decisions on observations obtained from video cameras, as they are cheap and easy to employ. Inner-city intersections represent an interesting but also very challenging scenario in this context: the road layout may be very complex and observations are often noisy or even missing due to heavy occlusions. While highway navigation and autonomous driving on simple and annotated intersections have already been demonstrated successfully, understanding and navigating general inner-city crossings with little prior knowledge remains an unsolved problem. This thesis is a contribution to understanding multi-object traffic scenes from video sequences. All data is provided by a camera system which is mounted on top of the autonomous driving platform AnnieWAY. The proposed probabilistic generative model reasons jointly about the 3D scene layout as well as the 3D location and orientation of objects in the scene. In particular, the scene topology, geometry, and traffic activities are inferred from short video sequences. The model takes advantage of monocular information in the form of vehicle tracklets, vanishing lines, and semantic labels. Additionally, the benefit of stereo features such as 3D scene flow and occupancy grids is investigated. Motivated by the impressive driving capabilities of humans, no further information such as GPS, lidar, radar or map knowledge is required. Experiments conducted on 113 representative intersection sequences show that the developed approach successfully infers the correct layout in a variety of difficult scenarios. To evaluate the importance of each feature cue, experiments with different feature combinations are conducted. Additionally, the proposed method is shown to improve object detection and object orientation estimation performance.

avg ps

pdf [BibTex]


A Study of X-Ray Image Perception for Pneumoconiosis Detection

Jampani, V.

IIIT-Hyderabad, Hyderabad, India, January 2013 (mastersthesis)

Abstract
Pneumoconiosis is an occupational lung disease caused by the inhalation of industrial dust. Despite increasing safety measures and better workplace environments, pneumoconiosis is deemed to be the most common occupational disease in developing countries like India and China. Screening and assessment of this disease are done through radiological observation of chest x-rays. Several studies have shown significant inter- and intra-reader observer variation in the diagnosis of this disease, demonstrating the complexity of the task and the importance of expertise in diagnosis. The present study aims at understanding the perceptual and cognitive factors affecting the reading of chest x-rays of pneumoconiosis patients. Understanding these factors helps in developing better image acquisition systems, better training regimens for radiologists, and better computer-aided diagnostic (CAD) systems. We used an eye-tracking experiment to study the various factors affecting the assessment of this diffuse lung disease. Specifically, we aimed at understanding the role of expertise and of the contralateral symmetric (CS) information present in chest x-rays on the diagnosis and the eye movements of the observers. We also studied inter- and intra-observer fixation consistency, along with the role of anatomical and bottom-up saliency features in attracting the gaze of observers of different expertise levels, to gain better insight into the effect of bottom-up and top-down visual saliency on the eye movements of observers. The experiment was conducted in a room dedicated to eye-tracking experiments. Participants consisting of novices (3), medical students (12), residents (4), and staff radiologists (4) were presented with good-quality PA chest x-rays and were asked to give profusion ratings for each of the 6 lung zones. An image set consisting of 17 normal full chest x-rays and 16 single-lung images was shown to the participants in random order. The time of diagnosis and the eye movements were also recorded using a remote head-free eye tracker. Results indicated that expertise and CS information play important roles in the diagnosis of pneumoconiosis. Novices and medical students are slow and inefficient, whereas residents and staff are quick and efficient. A key finding of our study is that the presence of CS information alone does not help improve diagnosis as much as learning how to use the information. This learning appears to be gained from focused training and years of experience. Hence, good training for radiologists and careful observation of each lung zone may improve the quality of diagnostic results. For residents, eye-scanning strategies play an important role in using the CS information present in chest radiographs; in staff radiologists, however, peripheral vision or higher-level cognitive processes seem to play a role in using the CS information. There is reasonably good inter- and intra-observer fixation consistency, suggesting the use of similar viewing strategies. Experience helps the observers develop new visual strategies based on the image content so that they can quickly and efficiently assess the disease level. The first few fixations seem to play an important role in choosing a visual strategy appropriate for the given image. Both inter-rib and rib regions are given equal importance by the observers. Although the reading of chest x-rays is highly task dependent, bottom-up saliency is shown to have played an important role in attracting the fixations of the observers. This role of bottom-up saliency appears to be stronger in lower-expertise groups than in higher-expertise groups. Both the bottom-up and top-down influences on visual fixations seem to change with time. The relative role of top-down and bottom-up influences on visual attention is still not completely understood and remains part of future work. Based on our experimental results, we have developed an extended saliency model that combines bottom-up saliency with the saliency of lung regions in a chest x-ray. This new saliency model performed significantly better than bottom-up saliency in predicting the gaze of the observers in our experiment. Even though the model is a simple combination of bottom-up saliency maps and segmented lung masks, this demonstrates that even basic models using simple image features can predict the fixations of the observers with good accuracy. Experimental analysis suggested that the factors affecting the reading of chest x-rays for pneumoconiosis are complex and varied. A good understanding of these factors helps in the development of better radiological screening for pneumoconiosis through improved training and through the use of improved CAD tools. The presented work is an attempt to gain insight into what these factors are and how they modify the behavior of the observers.
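
The abstract does not spell out the form of the extended saliency model; a minimal sketch of the described combination of a bottom-up saliency map S_bu with a segmented lung mask M_lung (weights and symbols hypothetical) would be

    S_{\mathrm{ext}}(\mathbf{x}) \;=\; \lambda\, S_{\mathrm{bu}}(\mathbf{x}) \;+\; (1 - \lambda)\, M_{\mathrm{lung}}(\mathbf{x}), \qquad \lambda \in [0, 1].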

ps

pdf [BibTex]


A Quantitative Analysis of Current Practices in Optical Flow Estimation and the Principles Behind Them

Sun, D., Roth, S., Black, M. J.

(CS-10-03), Brown University, Department of Computer Science, January 2013 (techreport)

ps

pdf [BibTex]


Modelling and Learning Approaches to Image Denoising

Burger, HC.

Eberhard Karls Universität Tübingen, Germany, 2013 (phdthesis)

ei

[BibTex]


Animating Samples from Gaussian Distributions

Hennig, P.

(8), Max Planck Institute for Intelligent Systems, Tübingen, Germany, 2013 (techreport)

ei pn

PDF [BibTex]


Linear mixed models for genome-wide association studies

Lippert, C.

University of Tübingen, Germany, 2013 (phdthesis)

ei

[BibTex]


Maximizing Kepler science return per telemetered pixel: Detailed models of the focal plane in the two-wheel era

Hogg, D. W., Angus, R., Barclay, T., Dawson, R., Fergus, R., Foreman-Mackey, D., Harmeling, S., Hirsch, M., Lang, D., Montet, B. T., Schiminovich, D., Schölkopf, B.

arXiv:1309.0653, 2013 (techreport)

ei

link (url) [BibTex]


Maximizing Kepler science return per telemetered pixel: Searching the habitable zones of the brightest stars

Montet, B. T., Angus, R., Barclay, T., Dawson, R., Fergus, R., Foreman-Mackey, D., Harmeling, S., Hirsch, M., Hogg, D. W., Lang, D., Schiminovich, D., Schölkopf, B.

arXiv:1309.0654, 2013 (techreport)

ei

link (url) [BibTex]


Modeling and Learning Complex Motor Tasks: A case study on Robot Table Tennis

Mülling, K.

Technical University Darmstadt, Germany, 2013 (phdthesis)

ei

[BibTex]


Intention Inference and Decision Making with Hierarchical Gaussian Process Dynamics Models

Wang, Z.

Technical University Darmstadt, Germany, 2013 (phdthesis)

ei

[BibTex]


Quantum kinetic theory for demagnetization after femtosecond laser pulses

Teeny, N.

Universität Stuttgart, Stuttgart, 2013 (mastersthesis)

mms

[BibTex]

2009


Learning an Interactive Segmentation System

Nickisch, H., Kohli, P., Rother, C.

Max Planck Institute for Biological Cybernetics, December 2009 (techreport)

Abstract
Many successful applications of computer vision to image or video manipulation are interactive by nature. However, parameters of such systems are often trained neglecting the user. Traditionally, interactive systems have been treated in the same manner as their fully automatic counterparts. Their performance is evaluated by computing the accuracy of their solutions under some fixed set of user interactions. This paper proposes a new evaluation and learning method which brings the user in the loop. It is based on the use of an active robot user - a simulated model of a human user. We show how this approach can be used to evaluate and learn parameters of state-of-the-art interactive segmentation systems. We also show how simulated user models can be integrated into the popular max-margin method for parameter learning and propose an algorithm to solve the resulting optimisation problem.

ei

Web [BibTex]


Detection of objects in noisy images and site percolation on square lattices

Langovoy, M., Wittich, O.

(2009-035), EURANDOM, Technische Universiteit Eindhoven, November 2009 (techreport)

Abstract
We propose a novel probabilistic method for the detection of objects in noisy images. The method uses results from percolation and random graph theories. We present an algorithm that detects objects of unknown shapes in the presence of random noise. Our procedure differs substantially from wavelet-based algorithms. The algorithm has linear complexity and exponential accuracy and is appropriate for real-time systems. We prove results on the consistency and algorithmic complexity of our procedure.
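
The report gives only a high-level description here; the Python sketch below (with hypothetical threshold parameters) merely illustrates the percolation-style idea of declaring a detection when the above-threshold pixels form an unusually large connected cluster on the square lattice, and should not be read as the authors' algorithm:

    import numpy as np
    from scipy import ndimage

    def detect_object(image, intensity_thresh, cluster_thresh):
        # Pixels above intensity_thresh become "occupied" lattice sites.
        occupied = image > intensity_thresh
        # 4-connectivity on the square lattice (site percolation neighbourhood).
        structure = np.array([[0, 1, 0],
                              [1, 1, 1],
                              [0, 1, 0]])
        labels, num_clusters = ndimage.label(occupied, structure=structure)
        if num_clusters == 0:
            return False
        # Cluster sizes; label 0 is the unoccupied background, so skip it.
        sizes = np.bincount(labels.ravel())[1:]
        # Flag a detection if the largest cluster is anomalously large.
        return bool(sizes.max() >= cluster_thresh)

Both the labeling and the size count run in time linear in the number of pixels, matching the linear-complexity claim in the abstract.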

ei

PDF [BibTex]


An Incremental GEM Framework for Multiframe Blind Deconvolution, Super-Resolution, and Saturation Correction

Harmeling, S., Sra, S., Hirsch, M., Schölkopf, B.

(187), Max Planck Institute for Biological Cybernetics, Tübingen, Germany, November 2009 (techreport)

Abstract
We develop an incremental generalized expectation maximization (GEM) framework to model the multiframe blind deconvolution problem. A simplistic version of this problem was recently studied by Harmeling et al. (2009). We solve a more realistic version of this problem which includes the following major features: (i) super-resolution ability despite noise and unknown blurring; (ii) saturation correction, i.e., handling of overexposed pixels that can otherwise confound the image processing; and (iii) simultaneous handling of color channels. These features are seamlessly integrated into our incremental GEM framework to yield simple but efficient multiframe blind deconvolution algorithms. We present technical details concerning critical steps of our algorithms, especially to highlight how all operations can be written using matrix-vector multiplications. We apply our algorithm to real-world images from astronomy and super-resolution tasks. Our experimental results show that our methods yield improved resolution and deconvolution at the same time.
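
Schematically, multiframe blind deconvolution starts from an observation model of the form (generic notation; the report's full model additionally includes a downsampling operator for super-resolution, saturation handling, and color channels)

    y_t \;=\; k_t * x + n_t, \qquad t = 1, \dots, T,

where the observed frames y_t share one latent image x, each frame has its own unknown blur kernel k_t, and n_t is noise; a GEM-style algorithm then, roughly speaking, alternates between updating the image and the blur kernels.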

ei

PDF [BibTex]


Efficient Filter Flow for Space-Variant Multiframe Blind Deconvolution

Hirsch, M., Sra, S., Schölkopf, B., Harmeling, S.

(188), Max Planck Institute for Biological Cybernetics, Tübingen, Germany, November 2009 (techreport)

Abstract
Ultimately motivated by the goal of facilitating space-variant blind deconvolution, we present a class of linear transformations that are expressive enough for space-variant filters but at the same time especially designed for efficient matrix-vector multiplications. Successful results on astronomical imaging through atmospheric turbulence and on noisy magnetic resonance images of constantly moving objects demonstrate the practical significance of our approach.
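
A common way to obtain such transformations is the overlap-add form of space-variant filtering, sketched here in generic notation (not necessarily the report's exact parameterization): the image is covered by smooth windows w_r that sum to one, each windowed patch is convolved with its own kernel k^{(r)}, and the results are summed,

    y \;=\; \sum_{r=1}^{R} k^{(r)} * \bigl(w_r \odot x\bigr), \qquad \sum_{r} w_r \equiv 1,

so every term can be evaluated with FFT-based convolutions, giving efficient matrix-vector products.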

ei

PDF [BibTex]


Algebraic polynomials and moments of stochastic integrals

Langovoy, M.

(2009-031), EURANDOM, Technische Universiteit Eindhoven, October 2009 (techreport)

ei

PDF [BibTex]


Kernel Learning Approaches for Image Classification

Gehler, PV.

Biologische Kybernetik, Universität des Saarlandes, Saarbrücken, Germany, October 2009 (phdthesis)

Abstract
This thesis extends the use of kernel learning techniques to specific problems of image classification. Kernel learning is a paradigm in the field of machine learning that generalizes the use of inner products to compute similarities between arbitrary objects. In image classification one aims to separate images based on their visual content. We address two important problems that arise in this context: learning with weak label information and combination of heterogeneous data sources. The contributions we report on are not unique to image classification, and apply to a more general class of problems. We study the problem of learning with label ambiguity in the multiple instance learning framework. We discuss several different image classification scenarios that arise in this context and argue that the standard multiple instance learning requires a more detailed disambiguation. Finally we review kernel learning approaches proposed for this problem and derive a more efficient algorithm to solve them. The multiple kernel learning framework is an approach to automatically select kernel parameters. We extend it to its infinite limit and present an algorithm to solve the resulting problem. This result is then applied in two directions. We show how to learn kernels that adapt to the special structure of images. Finally we compare different ways of combining image features for object classification and present significant improvements compared to previous methods.
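
The multiple kernel learning framework referred to above combines base kernels linearly (standard form, not thesis-specific notation),

    k_\beta(x, x') \;=\; \sum_{m=1}^{M} \beta_m\, k_m(x, x'), \qquad \beta_m \ge 0,

with the weights \beta learned jointly with the classifier; the infinite extension studied in the thesis, roughly speaking, replaces the finite sum by a combination over a continuously parameterized family of kernels.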

ei

PDF [BibTex]


A PAC-Bayesian Approach to Structure Learning

Seldin, Y.

Biologische Kybernetik, The Hebrew University of Jerusalem, Israel, September 2009 (phdthesis)

ei

PDF [BibTex]


Expectation Propagation on the Maximum of Correlated Normal Variables

Hennig, P.

Cavendish Laboratory, University of Cambridge, July 2009 (techreport)

Abstract
Many inference problems involving questions of optimality ask for the maximum or the minimum of a finite set of unknown quantities. This technical report derives the first two posterior moments of the maximum of two correlated Gaussian variables and the first two posterior moments of the two generating variables (corresponding to Gaussian approximations minimizing relative entropy). It is shown how this can be used to build a heuristic approximation to the maximum relationship over a finite set of Gaussian variables, allowing approximate inference by Expectation Propagation on such quantities.
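
For context, the classical first moment of the maximum of two correlated Gaussians (Clark, 1961), the kind of quantity treated in the report, is the following: with x_1 ~ N(\mu_1, \sigma_1^2), x_2 ~ N(\mu_2, \sigma_2^2), correlation \rho, s^2 = \sigma_1^2 + \sigma_2^2 - 2\rho\sigma_1\sigma_2, and \alpha = (\mu_1 - \mu_2)/s,

    \mathbb{E}\bigl[\max(x_1, x_2)\bigr] \;=\; \mu_1 \Phi(\alpha) + \mu_2 \Phi(-\alpha) + s\, \phi(\alpha),

where \Phi and \phi are the standard normal CDF and density; the report derives the first two posterior moments of this kind, together with those of the generating variables, for use in Expectation Propagation.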

ei pn

Web [BibTex]


Consistent Nonparametric Tests of Independence

Gretton, A., Györfi, L.

(172), Max Planck Institute for Biological Cybernetics, Tübingen, Germany, July 2009 (techreport)

Abstract
Three simple and explicit procedures for testing the independence of two multi-dimensional random variables are described. Two of the associated test statistics (L1, log-likelihood) are defined when the empirical distribution of the variables is restricted to finite partitions. A third test statistic is defined as a kernel-based independence measure. Two kinds of tests are provided. Distribution-free strong consistent tests are derived on the basis of large deviation bounds on the test statistics: these tests make almost surely no Type I or Type II error after a random sample size. Asymptotically alpha-level tests are obtained from the limiting distribution of the test statistics. For the latter tests, the Type I error converges to a fixed non-zero value alpha, and the Type II error drops to zero, for increasing sample size. All tests reject the null hypothesis of independence if the test statistics become large. The performance of the tests is evaluated experimentally on benchmark data.
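
As an illustration, the partition-based L1 statistic has the generic form (notation illustrative, not copied from the report)

    L_n \;=\; \sum_{A \in \mathcal{P}} \sum_{B \in \mathcal{Q}} \bigl| \nu_n(A \times B) - \mu_{n,1}(A)\, \mu_{n,2}(B) \bigr|,

where \nu_n is the joint empirical distribution, \mu_{n,1} and \mu_{n,2} are the empirical marginals, and \mathcal{P}, \mathcal{Q} are finite partitions of the two sample spaces; the test rejects independence when L_n is large.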

ei

PDF [BibTex]


ISocRob-MSL 2009 Team Description Paper for Middle Sized League

Lima, P., Santos, J., Estilita, J., Barbosa, M., Ahmad, A., Carreira, J.

13th Annual RoboCup International Symposium 2009, July 2009 (techreport)

Abstract
This paper describes the status of the ISocRob MSL robotic soccer team as required by the RoboCup 2009 qualification procedures. Since its previous participation in RoboCup, the ISocRob team has carried out significant developments in various topics, the most relevant of which are presented here. These include self-localization, 3D object tracking and cooperative object localization, motion control and relational behaviors. A brief description of the hardware of the ISocRob robots and of the software architecture adopted by the team is also included.

ps

[BibTex]


Semi-supervised subspace analysis of human functional magnetic resonance imaging data

Shelton, J., Blaschko, M., Bartels, A.

(185), Max Planck Institute for Biological Cybernetics, Tübingen, Germany, May 2009 (techreport)

Abstract
Kernel Canonical Correlation Analysis (KCCA) is a very general technique for subspace learning that incorporates PCA and LDA as special cases. Data acquired with functional magnetic resonance imaging (fMRI) are naturally amenable to these techniques, as the data are well aligned. fMRI data of the human brain are a particularly interesting candidate. In this study we implemented various supervised and semi-supervised versions of KCCA on human fMRI data, with regression to single- and multi-variate labels (corresponding to video content subjects viewed during the image acquisition). In both the single- and multi-variate conditions, the semi-supervised variants of KCCA performed better than the supervised variants, including a supervised variant with Laplacian regularization. We additionally analyze the weights learned by the regression in order to infer brain regions that are important to different types of visual processing.
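
One common regularized formulation of kernel CCA (a standard variant, not necessarily the exact one used in this report) seeks coefficient vectors \alpha, \beta maximizing the regularized correlation

    \rho \;=\; \max_{\alpha, \beta} \frac{\alpha^{\top} K_x K_y \beta}{\sqrt{\alpha^{\top} (K_x + \kappa I)^2 \alpha \;\; \beta^{\top} (K_y + \kappa I)^2 \beta}},

where K_x and K_y are the kernel matrices of the two views (here, fMRI responses and stimulus labels) and \kappa > 0 is a regularization constant; the semi-supervised variants in the study additionally exploit unlabeled data.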

ei

PDF [BibTex]


Kernel Methods in Computer Vision: Object Localization, Clustering, and Taxonomy Discovery

Blaschko, MB.

Biologische Kybernetik, Technische Universität Berlin, Berlin, Germany, March 2009 (phdthesis)

ei

PDF PDF [BibTex]


Model selection, large deviations and consistency of data-driven tests

Langovoy, M.

(2009-007), EURANDOM, Technische Universiteit Eindhoven, March 2009 (techreport)

Abstract
We consider three general classes of data-driven statistical tests. Neyman's smooth tests, data-driven score tests, and data-driven score tests for statistical inverse problems serve as important special examples for the classes of tests under consideration. Our tests additionally incorporate model selection rules. The rules are based on the penalization idea. Most of the optimal penalties derived in the statistical literature can be used in our tests. We prove general consistency theorems for the tests from those classes. Our proofs make use of large deviations inequalities for deterministic and random quadratic forms. The paper shows that the tests can be applied to simple and composite parametric, semi- and nonparametric hypotheses. Applications to testing in statistical inverse problems and statistics for stochastic processes are also presented.

ei

PDF [BibTex]