

2011


Projected Newton-type methods in machine learning

Schmidt, M., Kim, D., Sra, S.

In Optimization for Machine Learning, pages: 305-330, (Editors: Sra, S., Nowozin, S. and Wright, S. J.), MIT Press, Cambridge, MA, USA, December 2011 (inbook)

Abstract
We consider projected Newton-type methods for solving large-scale optimization problems arising in machine learning and related fields. We first introduce an algorithmic framework for projected Newton-type methods by reviewing a canonical projected (quasi-)Newton method. This method, while conceptually pleasing, has a high computation cost per iteration. Thus, we discuss two variants that are more scalable, namely, two-metric projection and inexact projection methods. Finally, we show how to apply the Newton-type framework to handle non-smooth objectives. Examples are provided throughout the chapter to illustrate machine learning applications of our framework.
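A minimal illustrative sketch of the canonical iteration the chapter reviews (an unconstrained descent step followed by projection back onto the feasible set): the toy code below uses the simplest possible metric, the identity, and a box-shaped feasible set, so it reduces to projected gradient descent. This is my own example, not code from the chapter.

    import numpy as np

    # Projected descent with the identity metric (projected gradient) on a box.
    # grad_f: gradient of the smooth objective; [lower, upper]: box constraints.
    def projected_descent(grad_f, x0, lower, upper, step, iters=100):
        x = np.asarray(x0, dtype=float)
        for _ in range(iters):
            x = x - step * grad_f(x)          # unconstrained descent step
            x = np.clip(x, lower, upper)      # Euclidean projection onto the box
        return x

    # Example: minimize 0.5*||x - c||^2 over the box [0, 1]^3.
    c = np.array([1.5, -0.3, 0.7])
    x_star = projected_descent(lambda x: x - c, np.zeros(3), 0.0, 1.0, step=0.5)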

ei

PDF Web [BibTex]

Combined whole-body PET/MR imaging: MR contrast agents do not affect the quantitative accuracy of PET following attenuation correction

Lois, C., Kupferschläger, J., Bezrukov, I., Schmidt, H., Werner, M., Mannheim, J., Pichler, B., Schwenzer, N., Beyer, T.

(SST15-05), 97th Scientific Assembly and Annual Meeting of the Radiological Society of North America (RSNA), December 2011 (talk)

Abstract
PURPOSE: Combined PET/MR imaging entails the use of MR contrast agents (MRCA) as part of integrated protocols. We assess additional attenuation of the PET emission signals in the presence of oral and intravenous (iv) MRCA made up of iron oxide and Gd-chelates, respectively.

METHOD AND MATERIALS: Phantom scans were performed on a clinical PET/CT (Biograph HiRez16, Siemens) and integrated whole-body PET/MR (Biograph mMR, Siemens) using oral (Lumirem) and intravenous (Gadovist) MRCA. Reference PET attenuation values were determined on a small-animal PET (Inveon, Siemens) using standard PET transmission imaging (TX). Seven syringes of 5mL were filled with (a) Water, (b) Lumirem_100 (100% conc.), (c) Gadovist_100 (100%), (d) Gadovist_18 (18%), (e) Gadovist_02 (0.2%), (f) Imeron-400 CT iv-contrast (100%) and (g) Imeron-400 (2.4%). The same set of syringes was scanned on CT (Sensation16, Siemens) at 120kVp and 160mAs. The effect of MRCA on the attenuation of PET emission data was evaluated using a 20cm cylinder filled uniformly with [18F]-FDG (FDG) in water (BGD). Three 4.5cm diameter cylinders were inserted into the phantom: (C1) Teflon, (C2) Water+FDG (2:1) and (C3) Lumirem_100+FDG (2:1). Two 50mL syringes filled with Gadovist_02+FDG (Sy1) and water+FDG (Sy2) were attached to the sides of (C1) to mimic the effects of iv-contrast in vessels near bone. Syringe-to-background activity ratio was 4-to-1. PET emission data were acquired for 10min each using the PET/CT and the PET/MR. Images were reconstructed using CT- and MR-based attenuation correction.

RESULTS: Mean linear PET attenuation (cm-1) on TX was (a) 0.098, (b) 0.098, (c) 0.300, (d) 0.134, (e) 0.095, (f) 0.397 and (g) 0.105. Corresponding CT attenuation (HU) was: (a) 5, (b) 14, (c) 3070, (d) 1040, (e) 13, (f) 3070 and (g) 347. Lumirem had little effect on PET attenuation with (C3) being 13% and 10% higher than (C2) on PET/CT and PET/MR, respectively. Gadovist_02 had even smaller effects with (Sy1) being 2.5% lower than (Sy2) on PET/CT and 1.2% higher than (Sy2) on PET/MR.

CONCLUSION: MRCA in high and clinically relevant concentrations have attenuation values similar to those of CT contrast and water, respectively. In clinical PET/MR scenarios MRCA are not expected to lead to significant attenuation of the PET emission signals.

ei

Web [BibTex]

Cooperative Cuts: a new use of submodularity in image segmentation

Jegelka, S.

Second I.S.T. Austria Symposium on Computer Vision and Machine Learning, October 2011 (talk)

ei

Web [BibTex]

Effect of MR Contrast Agents on Quantitative Accuracy of PET in Combined Whole-Body PET/MR Imaging

Lois, C., Bezrukov, I., Schmidt, H., Schwenzer, N., Werner, M., Pichler, B., Kupferschläger, J., Beyer, T.

2011(MIC3-3), 2011 IEEE Nuclear Science Symposium, Medical Imaging Conference (NSS-MIC), October 2011 (talk)

Abstract
Combined whole-body PET/MR systems are being tested in clinical practice today. Integrated imaging protocols entail the use of MR contrast agents (MRCA) that could bias PET attenuation correction. In this work, we assess the effect of MRCA in PET/MR imaging. We analyze the effect of oral and intravenous MRCA on PET activity after attenuation correction. We conclude that in clinical scenarios, MRCA are not expected to lead to significant attenuation of PET signals, and that attenuation maps are not biased after the ingestion of adequate oral contrasts.

ei

Web [BibTex]

First Results on Patients and Phantoms of a Fully Integrated Clinical Whole-Body PET/MRI

Schmidt, H., Schwenzer, N., Bezrukov, I., Kolb, A., Mantlik, F., Kupferschläger, J., Lois, C., Sauter, A., Brendle, C., Pfannenberg, C., Pichler, B.

2011(J2-8), 2011 IEEE Nuclear Science Symposium, Medical Imaging Conference (NSS-MIC), October 2011 (talk)

Abstract
First clinical fully integrated whole-body PET/MR scanners are just entering the field. Here, we present studies of the quantification accuracy and its variation within the PET field of view for small lesions from our BrainPET/MRI, a dedicated clinical brain scanner which was installed three years ago in Tübingen. Also, we present first results for patient and phantom scans of a fully integrated whole-body PET/MRI, which was installed two months ago at our department. The quantification accuracy and homogeneity of the BrainPET-Insert (Siemens Medical Solutions, Germany) installed inside the magnet bore of a clinical 3T MRI scanner (Magnetom TIM Trio, Siemens Medical Solutions, Germany) was evaluated by using eight hollow spheres with inner diameters from 3.95 to 7.86 mm placed at different positions inside a homogeneous cylinder phantom with 9:1 and 6:1 sphere-to-background ratios. The quantification accuracy for small lesions at different positions in the PET FoV shows a standard deviation of up to 11% and is acceptable for quantitative brain studies where the homogeneity of quantification over the entire FoV is essential. Image quality and resolution of the new Siemens whole-body PET/MR system (Biograph mMR, Siemens Medical Solutions, Germany) were evaluated according to the NEMA NU2 2007 protocol, using a body phantom containing six spheres with inner diameters from 10 to 37 mm at sphere-to-background ratios of 8:1 and 4:1, and F-18 point sources located at different positions inside the PET FoV, respectively. The evaluation of the whole-body PET/MR system reveals a good PET image quality and resolution comparable to state-of-the-art clinical PET/CT scanners. First images of patient studies carried out on the whole-body PET/MR are presented, highlighting the potential of combined PET/MR imaging.

ei

Web [BibTex]

Effect of MR contrast agents on quantitative accuracy of PET in combined whole-body PET/MR imaging

Lois, C., Kupferschläger, J., Bezrukov, I., Schmidt, H., Werner, M., Mannheim, J., Pichler, B., Schwenzer, N., Beyer, T.

(OP314), Annual Congress of the European Association of Nuclear Medicine (EANM), October 2011 (talk)

Abstract
PURPOSE: Combined PET/MR imaging entails the use of MR contrast agents (MRCA) as part of integrated protocols. MRCA are made up of iron oxide and Gd-chelates for oral and intravenous (iv) application, respectively. We assess additional attenuation of the PET emission signals in the presence of oral and iv MRCA.

MATERIALS AND METHODS: Phantom scans were performed on a clinical PET/CT (Biograph HiRez16, Siemens) and an integrated whole-body PET/MR (Biograph mMR, Siemens). Two common MRCA were evaluated: Lumirem (oral) and Gadovist (iv). Reference PET attenuation values were determined on a dedicated small-animal PET (Inveon, Siemens) using equivalent standard PET transmission source imaging (TX). Seven syringes of 5mL were filled with (a) Water, (b) Lumirem_100 (100% concentration), (c) Gadovist_100 (100%), (d) Gadovist_18 (18%), (e) Gadovist_02 (0.2%), (f) Imeron-400 CT iv-contrast (100%) and (g) Imeron-400 (2.4%). The same set of syringes was scanned on CT (Sensation16, Siemens) at 120kVp and 160mAs. The effect of MRCA on the attenuation of PET emission data was evaluated using a 20cm cylinder filled uniformly with [18F]-FDG (FDG) in water (BGD). Three 4.5cm diameter cylinders were inserted into the phantom: (C1) Teflon, (C2) Water+FDG (2:1) and (C3) Lumirem_100+FDG (2:1). Two 50mL syringes filled with Gadovist_02+FDG (Sy1) and water+FDG (Sy2) were attached to the sides of (C1) to mimic the effects of iv-contrast in vessels near bone. Syringe-to-background activity ratio was 4-to-1. PET emission data were acquired for 10min each using the PET/CT and the PET/MR. Images were reconstructed using CT- and MR-based attenuation correction (AC). Since Teflon is not correctly identified on MR, PET(/MR) data were reconstructed using MR-AC and CT-AC.

RESULTS: Mean linear PET attenuation (cm-1) on TX was (a) 0.098, (b) 0.098, (c) 0.300, (d) 0.134, (e) 0.095, (f) 0.397 and (g) 0.105. Corresponding CT attenuation (HU) was: (a) 5, (b) 14, (c) 3070, (d) 1040, (e) 13, (f) 3070 and (g) 347. Lumirem had little effect on PET attenuation with (C3) being 13%, 10% and 11% higher than (C2) on PET/CT, PET/MR with MR-AC, and PET/MR with CT-AC, respectively. Gadovist_02 had even smaller effects with (Sy1) being 2.5% lower, 1.2% higher, and 3.5% lower than (Sy2) on PET/CT, PET/MR with MR-AC and PET/MR with CT-AC, respectively.

CONCLUSION: MRCA in high and clinically relevant concentrations have attenuation values similar to those of CT contrast and water, respectively. In clinical PET/MR scenarios MRCA are not expected to lead to significant attenuation of the PET emission signals.

ei

Web [BibTex]

Multi-parametric Tumor Characterization and Therapy Monitoring using Simultaneous PET/MRI: initial results for Lung Cancer and GvHD

Sauter, A., Schmidt, H., Gueckel, B., Brendle, C., Bezrukov, I., Mantlik, F., Kolb, A., Mueller, M., Reimold, M., Federmann, B., Hetzel, J., Claussen, C., Pfannenberg, C., Horger, M., Pichler, B., Schwenzer, N.

(T110), 2011 World Molecular Imaging Congress (WMIC), September 2011 (talk)

Abstract
Hybrid imaging modalities such as [18F]FDG-PET/CT are superior in staging of e.g. lung cancer disease compared with stand-alone modalities. Clinical PET/MRI systems are about to enter the field of hybrid imaging and offer potential advantages. One added value could be a deeper insight into the tumor metabolism and tumorigenesis due to the combination of PET and dedicated MR methods such as MRS and DWI. Additionally, therapy monitoring of difficult-to-diagnose diseases such as chronic sclerodermic GvHD (csGvHD) can potentially be improved by this combination. We have applied PET/MRI in 3 patients with lung cancer and 4 patients with csGvHD before and during therapy. All 3 patients had lung cancer confirmed by histology (2 adenocarcinoma, 1 carcinoid). First, a [18F]FDG-PET/CT was performed with the following parameters: injected dose 351.7±25.1 MBq, uptake time 59.0±2.6 min, 3 min/bed. Subsequently, patients were brought to the PET/MRI imaging facility. The whole-body PET/MRI Biograph mMR system comprises 56 detector cassettes with a 59.4 cm transaxial and 25.8 cm axial FoV. The MRI is a modified Verio system with a magnet bore of 60 cm. The following parameters for PET acquisition were applied: uptake time 121.3±2.3 min, 3 bed positions, 6 min/bed. T1w, T2w, and DWI MR images were recorded simultaneously for each bed. Acquired PET data were reconstructed with an iterative 3D OSEM algorithm using 3 iterations and 21 subsets, Gaussian filter of 3 mm. The 4 patients with GvHD were brought to the brainPET/MRI imaging facility 2:10h-2:28h after tracer injection. A 9 min brainPET acquisition with simultaneous MRI of the lower extremities was performed. MRI examination included T1-weighted (pre and post gadolinium) and T2-weighted sequences. Attenuation correction was calculated based on manual bone segmentation and thresholds for soft tissue, fat and air. Soleus muscle (m), crural fascia (f1) and posterior crural intermuscular septum fascia (f2) were outlined with ROIs based on the pre-treatment T1-weighted images and coregistered using IRW (Siemens). Fascia-to-muscle ratios were calculated for PET (f/m), T1 contrast uptake ((post-contrast_f - pre-contrast_f)/(post-contrast_m - pre-contrast_m)) and T2 (T2_f/m). Both patients with adenocarcinoma show a lower ADC value compared with the carcinoid patient, suggesting a higher cellularity. This is also reflected in FDG-PET with higher SUV values. Our initial results reveal that PET/MRI can provide complementary information for a profound tumor characterization and therapy monitoring. The high soft tissue contrast provided by MRI is valuable for the assessment of the fascial inflammation. While in the first patient FDG and contrast uptake as well as edema, represented by T2 signals, decreased with ongoing therapy, all parameters remained comparatively stable in the second patient. Contrary to expectations, an increase in FDG uptake of patients 3 and 4 was accompanied by an increase of the T2 signals, but a decrease in contrast uptake. These initial results suggest that PET/MRI provides complementary information about the complex disease mechanisms in fibrosing disorders.

ei

Web [BibTex]

Statistical Image Analysis and Percolation Theory

Langovoy, M., Habeck, M., Schölkopf, B.

2011 Joint Statistical Meetings (JSM), August 2011 (talk)

Abstract
We develop a novel method for detection of signals and reconstruction of images in the presence of random noise. The method uses results from percolation theory. We specifically address the problem of detection of multiple objects of unknown shapes in the case of nonparametric noise. The noise density is unknown and can be heavy-tailed. The objects of interest have unknown varying intensities. No boundary shape constraints are imposed on the objects, only a set of weak bulk conditions is required. We view the object detection problem as hypothesis testing for discrete statistical inverse problems. We present an algorithm that detects greyscale objects of various shapes in noisy images. We prove results on consistency and algorithmic complexity of our procedures. Applications to cryo-electron microscopy are presented.

ei

Web [BibTex]

Statistical Learning Theory: Models, Concepts, and Results

von Luxburg, U., Schölkopf, B.

In Handbook of the History of Logic, Vol. 10: Inductive Logic, 10, pages: 651-706, (Editors: Gabbay, D. M., Hartmann, S. and Woods, J. H.), Elsevier North Holland, Amsterdam, Netherlands, May 2011 (inbook)

Abstract
Statistical learning theory provides the theoretical basis for many of today's machine learning algorithms and is arguably one of the most beautifully developed branches of artificial intelligence in general. It originated in Russia in the 1960s and gained wide popularity in the 1990s following the development of the so-called Support Vector Machine (SVM), which has become a standard tool for pattern recognition in a variety of domains ranging from computer vision to computational biology. Providing the basis of new learning algorithms, however, was not the only motivation for developing statistical learning theory. It was just as much a philosophical one, attempting to answer the question of what it is that allows us to draw valid conclusions from empirical data. In this article we attempt to give a gentle, non-technical overview of the key ideas and insights of statistical learning theory. We do not assume that the reader has a deep background in mathematics, statistics, or computer science. Given the nature of the subject matter, however, some familiarity with mathematical concepts and notations and some intuitive understanding of basic probability is required. There exist many excellent references to more technical surveys of the mathematics of statistical learning theory: the monographs by one of the founders of statistical learning theory ([Vapnik, 1995], [Vapnik, 1998]), a brief overview of statistical learning theory in Section 5 of [Schölkopf and Smola, 2002], more technical overview papers such as [Bousquet et al., 2003], [Mendelson, 2003], [Boucheron et al., 2005], [Herbrich and Williamson, 2002], and the monograph [Devroye et al., 1996].

ei

PDF Web DOI [BibTex]

PAC-Bayesian Analysis of Martingales and Multiarmed Bandits

Seldin, Y., Laviolette, F., Shawe-Taylor, J., Peters, J., Auer, P.

Max Planck Institute for Biological Cybernetics, Tübingen, Germany, May 2011 (techreport)

Abstract
We present two alternative ways to apply PAC-Bayesian analysis to sequences of dependent random variables. The first is based on a new lemma that makes it possible to bound expectations of convex functions of certain dependent random variables by expectations of the same functions of independent Bernoulli random variables. This lemma provides an alternative tool to the Hoeffding-Azuma inequality for bounding the concentration of martingale values. Our second approach is based on integration of the Hoeffding-Azuma inequality with PAC-Bayesian analysis. We also introduce a way to apply PAC-Bayesian analysis in situations of limited feedback. We combine the new tools to derive PAC-Bayesian generalization and regret bounds for the multiarmed bandit problem. Although our regret bound is not yet as tight as state-of-the-art regret bounds based on other well-established techniques, our results significantly expand the range of potential applications of PAC-Bayesian analysis and introduce a new analysis tool to reinforcement learning and many other fields where martingales and limited feedback are encountered.
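For reference, a standard statement of the Hoeffding-Azuma inequality mentioned above (a classical result quoted here for context, not taken from the report): if (M_k) is a martingale with bounded increments |M_k - M_{k-1}| <= c_k almost surely, then for every t > 0,

    P(M_n - M_0 \geq t) \;\leq\; \exp\!\left( - \frac{t^2}{2 \sum_{k=1}^{n} c_k^2} \right).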

ei

PDF Web [BibTex]

Non-stationary Correction of Optical Aberrations

Schuler, C., Hirsch, M., Harmeling, S., Schölkopf, B.

(1), Max Planck Institute for Intelligent Systems, Tübingen, Germany, May 2011 (techreport)

Abstract
Taking a sharp photo at several megapixel resolution traditionally relies on high-grade lenses. In this paper, we present an approach to alleviate image degradations caused by imperfect optics. We rely on a calibration step to encode the optical aberrations in a space-variant point spread function and obtain a corrected image by non-stationary deconvolution. By including the Bayer array in our image formation model, we can perform demosaicing as part of the deconvolution.

ei

PDF [BibTex]

Cooperative Cuts

Jegelka, S.

COSA Workshop: Combinatorial Optimization, Statistics, and Applications, March 2011 (talk)

Abstract
Combinatorial problems with submodular cost functions have recently drawn interest. In a standard combinatorial problem, the sum-of-weights cost is replaced by a submodular set function. The result is a powerful model that is, however, very hard to solve. In this talk, I will introduce cooperative cuts, minimum cuts with submodular edge weights. I will outline methods to approximately solve this problem, and show an application in computer vision. If time permits, the talk will also sketch regret-minimizing online algorithms for submodular-cost combinatorial problems. This is joint work with Jeff Bilmes (University of Washington).

ei

Web [BibTex]

Multiple Kernel Learning: A Unifying Probabilistic Viewpoint

Nickisch, H., Seeger, M.

Max Planck Institute for Biological Cybernetics, March 2011 (techreport)

Abstract
We present a probabilistic viewpoint on multiple kernel learning that unifies well-known regularised risk approaches and recent advances in approximate Bayesian inference relaxations. The framework proposes a general objective function, suitable for regression, robust regression and classification, that is a lower bound of the marginal likelihood and contains many regularised risk approaches as special cases. Furthermore, we derive an efficient and provably convergent optimisation algorithm.

ei

Web [BibTex]

Multiple testing, uncertainty and realistic pictures

Langovoy, M., Wittich, O.

(2011-004), EURANDOM, Technische Universiteit Eindhoven, January 2011 (techreport)

Abstract
We study statistical detection of grayscale objects in noisy images. The object of interest is of unknown shape and has an unknown intensity that can vary over the object and can be negative. No boundary shape constraints are imposed on the object, only a weak bulk condition for the object's interior is required. We propose an algorithm that can be used to detect grayscale objects of unknown shapes in the presence of nonparametric noise of unknown level. Our algorithm is based on a nonparametric multiple testing procedure. We establish the limit of applicability of our method via an explicit, closed-form, non-asymptotic and nonparametric consistency bound. This bound is valid for a wide class of nonparametric noise distributions. We achieve this by proving an uncertainty principle for percolation on finite lattices.

ei

PDF [BibTex]

Robot Learning

Peters, J., Tedrake, R., Roy, N., Morimoto, J.

In Encyclopedia of Machine Learning, pages: 865-869, Encyclopedia of machine learning, (Editors: Sammut, C. and Webb, G. I.), Springer, New York, NY, USA, January 2011 (inbook)

ei

PDF Web DOI [BibTex]

What You Expect Is What You Get? Potential Use of Contingent Negative Variation for Passive BCI Systems in Gaze-Based HCI

Ihme, K., Zander, TO.

In Affective Computing and Intelligent Interaction, 6975, pages: 447-456, Lecture Notes in Computer Science, (Editors: D’Mello, S., Graesser, A., Schuller, B. and Martin, J.-C.), Springer, Berlin, Germany, 2011 (inbook)

Abstract
When using eye movements for cursor control in human-computer interaction (HCI), it may be difficult to find an appropriate substitute for the click operation. Most approaches make use of dwell times. However, in this context the so-called Midas-Touch-Problem occurs, which means that the system wrongly interprets fixations due to long processing times or spontaneous dwellings of the user as a command. Lately it has been shown that brain-computer interface (BCI) input bears good prospects to overcome this problem, using imagined hand movements to elicit a selection. The current approach tries to develop this idea further by exploring potential signals for the use in a passive BCI, which would have the advantage that the brain signals used as input are generated automatically without conscious effort of the user. To explore event-related potentials (ERPs) giving information about the user’s intention to select an object, 32-channel electroencephalography (EEG) was recorded from ten participants interacting with a dwell-time-based system. Comparing ERP signals during the dwell time with those occurring during fixations on a neutral cross hair, a sustained negative slow cortical potential at central electrode sites was revealed. This negativity might be a contingent negative variation (CNV) reflecting the participants’ anticipation of the upcoming selection. Offline classification suggests that the CNV is detectable in single trials (mean accuracy 74.9%). In the future, research on the CNV should be carried out to ensure its stable occurrence in human-computer interaction and render possible its use as a potential substitute for the click operation.

ei

DOI [BibTex]

Kernel Methods in Bioinformatics

Borgwardt, KM.

In Handbook of Statistical Bioinformatics, pages: 317-334, Springer Handbooks of Computational Statistics ; 3, (Editors: Lu, H.H.-S., Schölkopf, B. and Zhao, H.), Springer, Berlin, Germany, 2011 (inbook)

Abstract
Kernel methods have now witnessed more than a decade of increasing popularity in the bioinformatics community. In this article, we will compactly review this development, examining the areas in which kernel methods have contributed to computational biology and describing the reasons for their success.

ei

PDF DOI [BibTex]

Cue Combination: Beyond Optimality

Rosas, P., Wichmann, F.

In Sensory Cue Integration, pages: 144-152, (Editors: Trommershäuser, J., Körding, K. and Landy, M. S.), Oxford University Press, 2011 (inbook)

ei

[BibTex]

Nonconvex proximal splitting: batch and incremental algorithms

Sra, S.

(2), Max Planck Institute for Intelligent Systems, Tübingen, Germany, 2011 (techreport)

Abstract
Within the unmanageably large class of nonconvex optimization, we consider the rich subclass of nonsmooth problems having composite objectives (this includes the extensively studied convex, composite objective problems as a special case). For this subclass, we introduce a powerful, new framework that permits asymptotically non-vanishing perturbations. In particular, we develop perturbation-based batch and incremental (online-like) nonconvex proximal splitting algorithms. To our knowledge, this is the first time that such perturbation-based nonconvex splitting algorithms are being proposed and analyzed. While the main contribution of the paper is the theoretical framework, we complement our results by presenting some empirical results on matrix factorization.
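As a rough sketch of the kind of iteration referred to above (generic notation of my own, not necessarily the report's): for a composite objective f(x) + r(x) with f smooth and r possibly nonsmooth, a perturbed proximal splitting step can be written as

    x_{k+1} \;=\; \operatorname{prox}_{\eta_k r}\bigl( x_k - \eta_k (\nabla f(x_k) + e_k) \bigr),

where prox_{\eta r} is the proximity operator of r and e_k is a perturbation that, in the framework above, need not vanish asymptotically.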

ei

PDF [BibTex]

Automated Control of AFM Based Nanomanipulation

Xie, H., Onal, C., Régnier, S., Sitti, M.

In Atomic Force Microscopy Based Nanorobotics, pages: 237-311, Springer Berlin Heidelberg, 2011 (incollection)

pi

[BibTex]

Teleoperation Based AFM Manipulation Control

Xie, H., Onal, C., Régnier, S., Sitti, M.

In Atomic Force Microscopy Based Nanorobotics, pages: 145-235, Springer Berlin Heidelberg, 2011 (incollection)

pi

[BibTex]

Descriptions and challenges of AFM based nanorobotic systems

Xie, H., Onal, C., Régnier, S., Sitti, M.

In Atomic Force Microscopy Based Nanorobotics, pages: 13-29, Springer Berlin Heidelberg, 2011 (incollection)

pi

[BibTex]

Tipping the Scales: Guidance and Intrinsically Motivated Behavior

Martius, G., Herrmann, J. M.

In Advances in Artificial Life, ECAL 2011, pages: 506-513, (Editors: Tom Lenaerts and Mario Giacobini and Hugues Bersini and Paul Bourgine and Marco Dorigo and René Doursat), MIT Press, 2011 (incollection)

al

[BibTex]

Benchmark datasets for pose estimation and tracking

Andriluka, M., Sigal, L., Black, M. J.

In Visual Analysis of Humans: Looking at People, pages: 253-274, (Editors: Moeslund and Hilton and Krüger and Sigal), Springer-Verlag, London, 2011 (incollection)

ps

publisher's site Project Page [BibTex]

Applications of AFM Based Nanorobotic Systems

Xie, H., Onal, C., Régnier, S., Sitti, M.

In Atomic Force Microscopy Based Nanorobotics, pages: 313-342, Springer Berlin Heidelberg, 2011 (incollection)

pi

[BibTex]

Steerable random fields for image restoration and inpainting

Roth, S., Black, M. J.

In Markov Random Fields for Vision and Image Processing, pages: 377-387, (Editors: Blake, A. and Kohli, P. and Rother, C.), MIT Press, 2011 (incollection)

Abstract
This chapter introduces the concept of a Steerable Random Field (SRF). In contrast to traditional Markov random field (MRF) models in low-level vision, the random field potentials of a SRF are defined in terms of filter responses that are steered to the local image structure. This steering uses the structure tensor to obtain derivative responses that are either aligned with, or orthogonal to, the predominant local image structure. Analysis of the statistics of these steered filter responses in natural images leads to the model proposed here. Clique potentials are defined over steered filter responses using a Gaussian scale mixture model and are learned from training data. The SRF model connects random fields with anisotropic regularization and provides a statistical motivation for the latter. Steering the random field to the local image structure improves image denoising and inpainting performance compared with traditional pairwise MRFs.
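The steering step described above can be sketched in a few lines. The following is a simplified, illustrative reconstruction (my own code, not the authors' implementation): Sobel derivatives, a Gaussian-smoothed structure tensor, and derivative responses rotated into the local orientation.

    import numpy as np
    from scipy.ndimage import gaussian_filter, sobel

    def steered_responses(img, sigma=2.0):
        """Derivative responses steered to the local image structure (illustration only)."""
        ix = sobel(img, axis=1, mode='reflect')       # horizontal derivative
        iy = sobel(img, axis=0, mode='reflect')       # vertical derivative
        # Structure tensor entries, averaged over a Gaussian neighbourhood.
        jxx = gaussian_filter(ix * ix, sigma)
        jxy = gaussian_filter(ix * iy, sigma)
        jyy = gaussian_filter(iy * iy, sigma)
        # Orientation of the predominant local structure (dominant eigenvector angle).
        theta = 0.5 * np.arctan2(2.0 * jxy, jxx - jyy)
        # Derivatives aligned with, and orthogonal to, that orientation.
        d_along = np.cos(theta) * ix + np.sin(theta) * iy
        d_ortho = -np.sin(theta) * ix + np.cos(theta) * iy
        return d_along, d_ortho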

ps

publisher site [BibTex]

Nanomechanics of AFM based nanomanipulation

Xie, H., Onal, C., Régnier, S., Sitti, M.

In Atomic Force Microscopy Based Nanorobotics, pages: 87-143, Springer Berlin Heidelberg, 2011 (incollection)

pi

[BibTex]

Instrumentation Issues of an AFM Based Nanorobotic System

Xie, H., Onal, C., Régnier, S., Sitti, M.

In Atomic Force Microscopy Based Nanorobotics, pages: 31-86, Springer Berlin Heidelberg, 2011 (incollection)

pi

[BibTex]

Projected Newton-type methods in machine learning

Schmidt, M., Kim, D., Sra, S.

In Optimization for Machine Learning, pages: 305-330, MIT Press, Cambridge, MA, USA, 2011 (incollection)

Abstract
We consider projected Newton-type methods for solving large-scale optimization problems arising in machine learning and related fields. We first introduce an algorithmic framework for projected Newton-type methods by reviewing a canonical projected (quasi-)Newton method. This method, while conceptually pleasing, has a high computation cost per iteration. Thus, we discuss two variants that are more scalable, namely, two-metric projection and inexact projection methods. Finally, we show how to apply the Newton-type framework to handle non-smooth objectives. Examples are provided throughout the chapter to illustrate machine learning applications of our framework.

mms

link (url) [BibTex]


2010


Computationally efficient algorithms for statistical image processing: Implementation in R

Langovoy, M., Wittich, O.

(2010-053), EURANDOM, Technische Universiteit Eindhoven, December 2010 (techreport)

Abstract
In the series of our earlier papers on the subject, we proposed a novel statistical hypothesis testing method for detection of objects in noisy images. The method uses results from percolation theory and random graph theory. We developed algorithms that allow the detection of objects of unknown shapes in the presence of nonparametric noise of unknown level and of unknown distribution. No boundary shape constraints were imposed on the objects, only a weak bulk condition for the object's interior was required. Our algorithms have linear complexity and exponential accuracy. In the present paper, we describe an implementation of our nonparametric hypothesis testing method. We provide a program that can be used for statistical experiments in image processing. This program is written in the statistical programming language R.

ei

PDF [BibTex]

Fast Convergent Algorithms for Expectation Propagation Approximate Bayesian Inference

Seeger, M., Nickisch, H.

Max Planck Institute for Biological Cybernetics, December 2010 (techreport)

Abstract
We propose a novel algorithm to solve the expectation propagation relaxation of Bayesian inference for continuous-variable graphical models. In contrast to most previous algorithms, our method is provably convergent. By marrying convergent EP ideas from (Opper&Winther 05) with covariance decoupling techniques (Wipf&Nagarajan 08, Nickisch&Seeger 09), it runs at least an order of magnitude faster than the most commonly used EP solver.

ei

Web [BibTex]

Markerless tracking of Dynamic 3D Scans of Faces

Walder, C., Breidt, M., Bülthoff, H., Schölkopf, B., Curio, C.

In Dynamic Faces: Insights from Experiments and Computation, pages: 255-276, (Editors: Curio, C., Bülthoff, H. H. and Giese, M. A.), MIT Press, Cambridge, MA, USA, December 2010 (inbook)

ei

Web [BibTex]

Policy Gradient Methods

Peters, J., Bagnell, J.

In Encyclopedia of Machine Learning, pages: 774-776, (Editors: Sammut, C. and Webb, G. I.), Springer, Berlin, Germany, December 2010 (inbook)

ei

PDF Web DOI [BibTex]

Comparative Quantitative Evaluation of MR-Based Attenuation Correction Methods in Combined Brain PET/MR

Mantlik, F., Hofmann, M., Bezrukov, I., Kolb, A., Beyer, T., Reimold, M., Pichler, B., Schölkopf, B.

2010(M08-4), 2010 Nuclear Science Symposium and Medical Imaging Conference (NSS-MIC), November 2010 (talk)

Abstract
Combined PET/MR provides at the same time molecular and functional imaging as well as excellent soft tissue contrast. It does not allow one to directly measure the attenuation properties of scanned tissues, despite the fact that accurate attenuation maps are necessary for quantitative PET imaging. Several methods have therefore been proposed for MR-based attenuation correction (MR-AC). So far, they have only been evaluated on data acquired from separate MR and PET scanners. We evaluated several MR-AC methods on data from 10 patients acquired on a combined BrainPET/MR scanner. This allowed the consideration of specific PET/MR issues, such as the RF coil that attenuates and scatters 511 keV gammas. We evaluated simple MR thresholding methods as well as atlas and machine learning-based MR-AC. CT-based AC served as gold standard reference. To comprehensively evaluate the MR-AC accuracy, we used RoIs from 2 anatomic brain atlases with different levels of detail. Visual inspection of the PET images indicated that even the basic FLASH threshold MR-AC may be sufficient for several applications. Using a UTE sequence for bone prediction in MR-based thresholding occasionally led to false prediction of bone tissue inside the brain, causing a significant overestimation of PET activity. Although it yielded a lower mean underestimation of activity, it exhibited the highest variance of all methods. The atlas averaging approach had a smaller mean error, but showed high maximum overestimation on the RoIs of the more detailed atlas. The Naive Bayes and Atlas-Patch MR-AC yielded the smallest variance, and the Atlas-Patch also showed the smallest mean error. In conclusion, atlas-based AC using only MR information on the BrainPET/MR yields a high level of accuracy that is sufficient for clinical quantitative imaging requirements. The Atlas-Patch approach was superior to alternative atlas-based methods, yielding a quantification error below 10% for all RoIs except very small ones.

ei

[BibTex]

A PAC-Bayesian Analysis of Graph Clustering and Pairwise Clustering

Seldin, Y.

Max Planck Institute for Biological Cybernetics, Tübingen, Germany, September 2010 (techreport)

Abstract
We formulate weighted graph clustering as a prediction problem: given a subset of edge weights we analyze the ability of graph clustering to predict the remaining edge weights. This formulation enables practical and theoretical comparison of different approaches to graph clustering as well as comparison of graph clustering with other possible ways to model the graph. We adapt the PAC-Bayesian analysis of co-clustering (Seldin and Tishby, 2008; Seldin, 2009) to derive a PAC-Bayesian generalization bound for graph clustering. The bound shows that graph clustering should optimize a trade-off between empirical data fit and the mutual information that clusters preserve on the graph nodes. A similar trade-off derived from information-theoretic considerations was already shown to produce state-of-the-art results in practice (Slonim et al., 2005; Yom-Tov and Slonim, 2009). This paper supports the empirical evidence by providing a better theoretical foundation, suggesting formal generalization guarantees, and offering a more accurate way to deal with finite sample issues. We derive a bound minimization algorithm and show that it provides good results in real-life problems and that the derived PAC-Bayesian bound is reasonably tight.

ei

PDF Web [BibTex]

Sparse nonnegative matrix approximation: new formulations and algorithms

Tandon, R., Sra, S.

(193), Max Planck Institute for Biological Cybernetics, Tübingen, Germany, September 2010 (techreport)

Abstract
We introduce several new formulations for sparse nonnegative matrix approximation. Subsequently, we solve these formulations by developing generic algorithms. Further, to help selecting a particular sparse formulation, we briefly discuss the interpretation of each formulation. Finally, preliminary experiments are presented to illustrate the behavior of our formulations and algorithms.

ei

PDF [BibTex]

Robust nonparametric detection of objects in noisy images

Langovoy, M., Wittich, O.

(2010-049), EURANDOM, Technische Universiteit Eindhoven, September 2010 (techreport)

Abstract
We propose a novel statistical hypothesis testing method for detection of objects in noisy images. The method uses results from percolation theory and random graph theory. We present an algorithm that allows the detection of objects of unknown shapes in the presence of nonparametric noise of unknown level and of unknown distribution. No boundary shape constraints are imposed on the object, only a weak bulk condition for the object's interior is required. The algorithm has linear complexity and exponential accuracy and is appropriate for real-time systems. In this paper, we develop further the mathematical formalism of our method and explore important connections to the mathematical theory of percolation and statistical physics. We prove results on consistency and algorithmic complexity of our testing procedure. In addition, we address not only the asymptotic behavior of the method, but also its finite sample performance.
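As a toy illustration of the scheme described above (threshold the image pixel-wise, then examine connected clusters of exceedances and declare a detection when a cluster exceeds a critical size), the code below is a simplified reconstruction; the pixel threshold and critical cluster size are placeholders that, in the actual method, come from the percolation-theoretic analysis in the report.

    import numpy as np
    from scipy import ndimage

    def percolation_style_test(image, pixel_threshold, critical_cluster_size):
        """Toy detector: 'object present' if thresholding yields a large connected cluster."""
        exceed = image > pixel_threshold                 # pixel-wise exceedances
        labels, n_clusters = ndimage.label(exceed)       # 4-connected clusters
        if n_clusters == 0:
            return False
        sizes = ndimage.sum(exceed, labels, index=np.arange(1, n_clusters + 1))
        return bool(sizes.max() >= critical_cluster_size)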

ei

PDF [BibTex]

Statistical image analysis and percolation theory

Davies, P., Langovoy, M., Wittich, O.

73rd Annual Meeting of the Institute of Mathematical Statistics (IMS), August 2010 (talk)

Abstract
We develop a novel method for detection of signals and reconstruction of images in the presence of random noise. The method uses results from percolation theory. We specifically address the problem of detection of objects of unknown shapes in the case of nonparametric noise. The noise density is unknown and can be heavy-tailed. We view the object detection problem as hypothesis testing for discrete statistical inverse problems. We present an algorithm that allows the detection of objects of various shapes in noisy images. We prove results on consistency and algorithmic complexity of our procedures.

ei

Web [BibTex]

Large Scale Variational Inference and Experimental Design for Sparse Generalized Linear Models

Seeger, M., Nickisch, H.

Max Planck Institute for Biological Cybernetics, August 2010 (techreport)

Abstract
Many problems of low-level computer vision and image processing, such as denoising, deconvolution, tomographic reconstruction or super-resolution, can be addressed by maximizing the posterior distribution of a sparse linear model (SLM). We show how higher-order Bayesian decision-making problems, such as optimizing image acquisition in magnetic resonance scanners, can be addressed by querying the SLM posterior covariance, unrelated to the density's mode. We propose a scalable algorithmic framework, with which SLM posteriors over full, high-resolution images can be approximated for the first time, solving a variational optimization problem which is convex iff posterior mode finding is convex. These methods successfully drive the optimization of sampling trajectories for real-world magnetic resonance imaging through Bayesian experimental design, which has not been attempted before. Our methodology provides new insight into similarities and differences between sparse reconstruction and approximate Bayesian inference, and has important implications for compressive sensing of real-world images.

ei

Web [BibTex]


Cooperative Cuts for Image Segmentation

Jegelka, S., Bilmes, J.

(UWEETR-1020-0003), University of Washington, Seattle, WA, USA, August 2010 (techreport)

Abstract
We propose a novel framework for graph-based cooperative regularization that uses submodular costs on graph edges. We introduce an efficient iterative algorithm to solve the resulting hard discrete optimization problem, and show that it has a guaranteed approximation factor. The edge-submodular formulation is amenable to the same extensions as standard graph cut approaches, and applicable to a range of problems. We apply this method to the image segmentation problem. Specifically, we introduce a discount for homogeneous boundaries in binary image segmentation on very difficult images, namely long, thin objects and color and grayscale images with a shading gradient. The experiments show that significant portions of previously truncated objects are now preserved.

ei

Web [BibTex]

Statistical image analysis and percolation theory

Langovoy, M., Wittich, O.

28th European Meeting of Statisticians (EMS), August 2010 (talk)

ei

PDF Web [BibTex]

Fast algorithms for total-variation-based optimization

Barbero, A., Sra, S.

(194), Max Planck Institute for Biological Cybernetics, Tübingen, Germany, August 2010 (techreport)

Abstract
We derive a number of methods to efficiently solve simple optimization problems subject to a total-variation (TV) regularization, under different norms of the TV operator and both for the case of 1-dimensional and 2-dimensional data. In spite of the non-smooth, non-separable nature of the TV terms considered, we show that a dual formulation with strong structure can be derived. Taking advantage of this structure we develop adaptations of existing algorithms from the optimization literature, resulting in efficient methods for the problem at hand. Experimental results show that for 1-dimensional data the proposed methods achieve convergence within good accuracy levels in practically linear time, both for L1 and L2 norms. For the more challenging 2-dimensional case a performance of order O(N^2 log^2 N) for N x N inputs is achieved when using the L2 norm. A final section suggests possible extensions and lines of further work.
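To make the dual viewpoint concrete in the simplest setting (1-dimensional data, L2 data term), here is a small sketch of a standard Chambolle-style projected-gradient scheme on the box-constrained dual; it illustrates the general idea and is not one of the report's algorithms.

    import numpy as np

    def tv1d_prox(y, lam, iters=500, tau=0.25):
        """Solve min_x 0.5*||x - y||^2 + lam * sum_i |x_{i+1} - x_i| via projected
        gradient on the dual, where the dual variable lives in the box [-lam, lam]."""
        u = np.zeros(len(y) - 1)                           # one dual variable per difference
        for _ in range(iters):
            x = y + np.diff(u, prepend=0.0, append=0.0)    # primal from dual: x = y - D^T u
            u = np.clip(u + tau * np.diff(x), -lam, lam)   # dual step + box projection
        return y + np.diff(u, prepend=0.0, append=0.0)

The step size tau = 0.25 is the usual safe choice, since the difference operator D satisfies ||D D^T|| <= 4.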

ei

PDF [BibTex]

Cooperative Cuts: Graph Cuts with Submodular Edge Weights

Jegelka, S., Bilmes, J.

24th European Conference on Operational Research (EURO XXIV), July 2010 (talk)

Abstract
We introduce cooperative cut, a minimum cut problem whose cost is a submodular function on sets of edges: the cost of an edge that is added to a cut set depends on the edges already in the set. Applications arise e.g. in probabilistic graphical models and image processing. We prove NP-hardness and a polynomial lower bound on the approximation factor, and upper bounds via four approximation algorithms based on different techniques. Our additional heuristics have attractive practical properties, e.g., they rely only on standard min-cut. Both our algorithms and heuristics appear to do well in practice.

ei

PDF Web [BibTex]

Solving Large-Scale Nonnegative Least Squares

Sra, S.

16th Conference of the International Linear Algebra Society (ILAS), June 2010 (talk)

Abstract
We study the fundamental problem of nonnegative least squares. This problem was apparently introduced by Lawson and Hanson [1] under the name NNLS. As is evident from its name, NNLS seeks least-squares solutions that are also nonnegative. Owing to its wide applicability, numerous algorithms have been derived for NNLS, beginning from the active-set approach of Lawson and Hanson [1] leading up to the sophisticated interior-point method of Bellavia et al. [2]. We present a new algorithm for NNLS that combines projected subgradients with the non-monotonic gradient descent idea of Barzilai and Borwein [3]. Our resulting algorithm is called BBSG, and we guarantee its convergence by exploiting properties of NNLS in conjunction with projected subgradients. BBSG is surprisingly simple and scales well to large problems. We substantiate our claims by empirically evaluating BBSG and comparing it with established convex solvers and specialized NNLS algorithms. The numerical results suggest that BBSG is a practical method for solving large-scale NNLS problems.
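The following is a simplified sketch in the spirit of the approach described above: projected gradient steps on the NNLS objective with a Barzilai-Borwein step length. It is an illustrative reconstruction of my own, not the BBSG algorithm from the talk, and it omits the safeguards a robust implementation would need.

    import numpy as np

    def nnls_projected_bb(A, b, iters=300):
        """min_{x >= 0} 0.5*||Ax - b||^2 with projected Barzilai-Borwein steps."""
        x = np.zeros(A.shape[1])
        g = A.T @ (A @ x - b)                        # gradient of the smooth objective
        alpha = 1.0 / (np.linalg.norm(A, 2) ** 2)    # conservative first step length
        for _ in range(iters):
            x_new = np.maximum(x - alpha * g, 0.0)   # gradient step, then projection onto x >= 0
            g_new = A.T @ (A @ x_new - b)
            s, d = x_new - x, g_new - g
            sd = float(s @ d)
            if sd > 1e-12:
                alpha = float(s @ s) / sd            # BB1 step length
            x, g = x_new, g_new
        return x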

ei

PDF PDF [BibTex]

Gaussian Mixture Modeling with Gaussian Process Latent Variable Models

Nickisch, H., Rasmussen, C.

Max Planck Institute for Biological Cybernetics, June 2010 (techreport)

Abstract
Density modeling is notoriously difficult for high dimensional data. One approach to the problem is to search for a lower dimensional manifold which captures the main characteristics of the data. Recently, the Gaussian Process Latent Variable Model (GPLVM) has successfully been used to find low dimensional manifolds in a variety of complex data. The GPLVM consists of a set of points in a low dimensional latent space, and a stochastic map to the observed space. We show how it can be interpreted as a density model in the observed space. However, the GPLVM is not trained as a density model and therefore yields bad density estimates. We propose a new training strategy and obtain improved generalisation performance and better density estimates in comparative evaluations on several benchmark data sets.

ei

Web [BibTex]

Matrix Approximation Problems

Sra, S.

EU Regional School: Rheinisch-Westfälische Technische Hochschule Aachen, May 2010 (talk)

ei

PDF AVI [BibTex]

BCI2000 and Python

Hill, NJ.

Invited lecture at the 7th International BCI2000 Workshop, Pacific Grove, CA, USA, May 2010 (talk)

Abstract
A tutorial, with exercises, on how to integrate your own Python code with the BCI2000 realtime software package.

ei

PDF [BibTex]

Extending BCI2000 Functionality with Your Own C++ Code

Hill, NJ.

Invited lecture at the 7th International BCI2000 Workshop, Pacific Grove, CA, USA, May 2010 (talk)

Abstract
A tutorial, with exercises, on how to use BCI2000 C++ framework to write your own real-time signal-processing modules.

ei

[BibTex]

Generalized Proximity and Projection with Norms and Mixed-norms

Sra, S.

(192), Max Planck Institute for Biological Cybernetics, Tübingen, Germany, May 2010 (techreport)

Abstract
We discuss generalized proximity operators (GPO) and their associated generalized projection problems. On inputs of size n, we show how to efficiently apply GPOs and generalized projections for separable norms and distance-like functions to accuracy ε in O(n log(1/ε)) time. We also derive projection algorithms that run theoretically in O(n log n log(1/ε)) time but can for suitable parameter ranges empirically outperform the O(n log(1/ε)) projection method. The proximity and projection tasks are either separable, and solved directly, or are reduced to a single root-finding step. We highlight that as a byproduct, our analysis also yields an O(n log(1/ε)) (weakly linear-time) procedure for Euclidean projections onto the ℓ1,∞-norm ball; previously only an O(n log n) method was known. We provide empirical evaluation to illustrate the performance of our methods, noting that for the ℓ1,∞-norm projection, our implementation is more than two orders of magnitude faster than the previously known method.
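To illustrate the single root-finding reduction on a familiar special case, the sketch below computes the Euclidean projection onto an ℓ1-norm ball by bisection on the shrinkage threshold, which runs in O(n log(1/ε)) time; it is an example in the spirit of the report, not its code.

    import numpy as np

    def project_l1_ball(v, radius=1.0, tol=1e-10):
        """Euclidean projection of v onto {x : ||x||_1 <= radius} via bisection
        on the soft-thresholding parameter theta (a single root-finding step)."""
        a = np.abs(v)
        if a.sum() <= radius:
            return v.copy()                          # already feasible
        lo, hi = 0.0, a.max()                        # the optimal theta lies in this bracket
        while hi - lo > tol:
            theta = 0.5 * (lo + hi)
            if np.maximum(a - theta, 0.0).sum() > radius:
                lo = theta                           # not enough shrinkage yet
            else:
                hi = theta
        theta = 0.5 * (lo + hi)
        return np.sign(v) * np.maximum(a - theta, 0.0)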

ei

PDF [BibTex]
