

2019


Semi-supervised learning, causality, and the conditional cluster assumption

von Kügelgen, J., Mey, A., Loog, M., Schölkopf, B.

NeurIPS 2019 Workshop “Do the right thing”: machine learning and causal inference for improved decision making, December 2019 (poster)

ei

link (url) [BibTex]

Optimal experimental design via Bayesian optimization: active causal structure learning for Gaussian process networks

von Kügelgen, J., Rubenstein, P. K., Schölkopf, B., Weller, A.

NeurIPS 2019 Workshop “Do the right thing”: machine learning and causal inference for improved decision making, December 2019 (poster) Accepted

ei

link (url) [BibTex]

Actively Learning Gaussian Process Dynamics

Buisson-Fenet, M., Solowjow, F., Trimpe, S.

2019 (techreport) Submitted

Abstract
Despite the availability of ever more data enabled through modern sensor and computer technology, it still remains an open problem to learn dynamical systems in a sample-efficient way. We propose active learning strategies that leverage information-theoretical properties arising naturally during Gaussian process regression, while respecting constraints on the sampling process imposed by the system dynamics. Sample points are selected in regions with high uncertainty, leading to exploratory behavior and data-efficient training of the model. All results are verified in an extensive numerical benchmark.

ics

ArXiv [BibTex]
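
As an illustration of the sampling rule described in the abstract above, a minimal sketch (not the authors' implementation, and ignoring the constraints imposed by the system dynamics): a Gaussian process is refit to the data gathered so far and the next query point is the candidate with the largest predictive standard deviation. The function true_dynamics and the candidate grid are hypothetical stand-ins.

    # Minimal sketch of uncertainty-based active sampling with a GP
    # (illustrative only; not the authors' code; dynamics constraints omitted).
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    def true_dynamics(x):
        # hypothetical stand-in for the unknown system response
        return np.sin(3 * x) + 0.1 * np.random.randn(*x.shape)

    X_train = np.array([[0.0], [1.0]])            # initial observations
    y_train = true_dynamics(X_train).ravel()
    candidates = np.linspace(0.0, 2.0, 200).reshape(-1, 1)

    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3), alpha=1e-2)
    for _ in range(10):
        gp.fit(X_train, y_train)
        _, std = gp.predict(candidates, return_std=True)
        x_next = candidates[np.argmax(std)]       # query the most uncertain point
        y_next = true_dynamics(x_next.reshape(1, -1)).ravel()
        X_train = np.vstack([X_train, x_next])
        y_train = np.concatenate([y_train, y_next])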


Demo Abstract: Fast Feedback Control and Coordination with Mode Changes for Wireless Cyber-Physical Systems

(Best Demo Award)

Mager, F., Baumann, D., Jacob, R., Thiele, L., Trimpe, S., Zimmerling, M.

Proceedings of the 18th ACM/IEEE Conference on Information Processing in Sensor Networks (IPSN), pages: 340-341, 18th ACM/IEEE Conference on Information Processing in Sensor Networks (IPSN), April 2019 (poster)

ics

arXiv PDF DOI [BibTex]

Perception of temporal dependencies in autoregressive motion

Meding, K., Schölkopf, B., Wichmann, F. A.

European Conference on Visual Perception (ECVP), 2019 (poster)

ei

[BibTex]

Prototyping Micro- and Nano-Optics with Focused Ion Beam Lithography

Keskinbora, K.

SL48, pages: 46, SPIE.Spotlight, SPIE Press, Bellingham, WA, 2019 (book)

mms

DOI [BibTex]

Event-triggered Learning

Solowjow, F., Trimpe, S.

2019 (techreport) Submitted

ics

arXiv PDF [BibTex]


Phenomenal Causality and Sensory Realism

Bruijns, S. A., Meding, K., Schölkopf, B., Wichmann, F. A.

European Conference on Visual Perception (ECVP), 2019 (poster)

ei

[BibTex]


2017


Improving performance of linear field generation with multi-coil setup by optimizing coils position

Aghaeifar, A., Loktyushin, A., Eschelbach, M., Scheffler, K.

Magnetic Resonance Materials in Physics, Biology and Medicine, 30(Supplement 1):S259, 34th Annual Scientific Meeting of the European Society for Magnetic Resonance in Medicine and Biology (ESMRMB), October 2017 (poster)

ei

link (url) DOI [BibTex]

Estimating B0 inhomogeneities with projection FID navigator readouts

Loktyushin, A., Ehses, P., Schölkopf, B., Scheffler, K.

25th Annual Meeting and Exhibition of the International Society for Magnetic Resonance in Medicine (ISMRM), April 2017 (poster)

ei

link (url) [BibTex]

Image Quality Improvement by Applying Retrospective Motion Correction on Quantitative Susceptibility Mapping and R2*

Feng, X., Loktyushin, A., Deistung, A., Reichenbach, J.

25th Annual Meeting and Exhibition of the International Society for Magnetic Resonance in Medicine (ISMRM), April 2017 (poster)

ei

link (url) [BibTex]

Elements of Causal Inference - Foundations and Learning Algorithms

Peters, J., Janzing, D., Schölkopf, B.

Adaptive Computation and Machine Learning Series, The MIT Press, Cambridge, MA, USA, 2017 (book)

ei

PDF [BibTex]

Mobile Microrobotics

Sitti, M.

Mobile Microrobotics, The MIT Press, Cambridge, MA, 2017 (book)

Abstract
Progress in micro- and nano-scale science and technology has created a demand for new microsystems for high-impact applications in healthcare, biotechnology, manufacturing, and mobile sensor networks. The new robotics field of microrobotics has emerged to extend our interactions and explorations to sub-millimeter scales. This is the first textbook on micron-scale mobile robotics, introducing the fundamentals of design, analysis, fabrication, and control, and drawing on case studies of existing approaches. The book covers the scaling laws that can be used to determine the dominant forces and effects at the micron scale; models forces acting on microrobots, including surface forces, friction, and viscous drag; and describes such possible microfabrication techniques as photo-lithography, bulk micromachining, and deep reactive ion etching. It presents on-board and remote sensing methods, noting that remote sensors are currently more feasible; studies possible on-board microactuators; discusses self-propulsion methods that use self-generated local gradients and fields or biological cells in liquid environments; and describes remote microrobot actuation methods for use in limited spaces such as inside the human body. It covers possible on-board powering methods, indispensable in future medical and other applications; locomotion methods for robots on surfaces, in liquids, in air, and on fluid-air interfaces; and the challenges of microrobot localization and control, in particular multi-robot control methods for magnetic microrobots. Finally, the book addresses current and future applications, including noninvasive medical diagnosis and treatment, environmental remediation, and scientific tools.

pi

Mobile Microrobotics By Metin Sitti - Chapter 1 (PDF) link (url) [BibTex]

New Directions for Learning with Kernels and Gaussian Processes (Dagstuhl Seminar 16481)

Gretton, A., Hennig, P., Rasmussen, C., Schölkopf, B.

Dagstuhl Reports, 6(11):142-167, 2017 (book)

ei pn

DOI [BibTex]

Design of a visualization scheme for functional connectivity data of Human Brain

Bramlage, L.

Hochschule Osnabrück - University of Applied Sciences, 2017 (thesis)

sf

Bramlage_BSc_2017.pdf [BibTex]


Generalized phase locking analysis of electrophysiology data

Safavi, S., Panagiotaropoulos, T., Kapoor, V., Logothetis, N. K., Besserve, M.

ESI Systems Neuroscience Conference (ESI-SyNC 2017): Principles of Structural and Functional Connectivity, 2017 (poster)

ei

[BibTex]


2013


Puppet Flow

Zuffi, S., Black, M. J.

(7), Max Planck Institute for Intelligent Systems, October 2013 (techreport)

Abstract
We introduce Puppet Flow (PF), a layered model describing the optical flow of a person in a video sequence. We consider video frames composed of two layers: a foreground layer corresponding to a person, and a background layer. We model the background as an affine flow field. The foreground layer, being a moving person, requires reasoning about the articulated nature of the human body. We thus represent the foreground layer with the Deformable Structures model (DS), a parametrized 2D part-based human body representation. We call the motion field defined through articulated motion and deformation of the DS model a Puppet Flow. By exploiting the DS representation, Puppet Flow is a parametrized optical flow field, whose parameters are the person's pose, gender and body shape.

ps

pdf Project Page Project Page [BibTex]

Studying large-scale brain networks: electrical stimulation and neural-event-triggered fMRI

Logothetis, N., Eschenko, O., Murayama, Y., Augath, M., Steudel, T., Evrard, H., Besserve, M., Oeltermann, A.

Twenty-Second Annual Computational Neuroscience Meeting (CNS*2013), BMC Neuroscience, 14(Supplement 1):A1, July 2013 (talk)

ei

Web [BibTex]

Learning and Optimization with Submodular Functions

Sankaran, B., Ghazvininejad, M., He, X., Kale, D., Cohen, L.

ArXiv, May 2013 (techreport)

Abstract
In many naturally occurring optimization problems one needs to ensure that the definition of the optimization problem lends itself to solutions that are tractable to compute. In cases where exact solutions cannot be computed tractably, it is beneficial to have strong guarantees on the tractable approximate solutions. In order to operate under these criteria, most optimization problems are cast under the umbrella of convexity or submodularity. In this report we will study design and optimization over a common class of functions called submodular functions. Set functions, and specifically submodular set functions, characterize a wide variety of naturally occurring optimization problems, and the property of submodularity of set functions has deep theoretical consequences with wide ranging applications. Informally, the property of submodularity of set functions concerns the intuitive principle of diminishing returns. This property states that adding an element to a smaller set has more value than adding it to a larger set. Common examples of submodular monotone functions are entropies, concave functions of cardinality, and matroid rank functions; non-monotone examples include graph cuts, network flows, and mutual information. In this paper we will review the formal definition of submodularity; the optimization of submodular functions, both maximization and minimization; and finally discuss some applications in relation to learning and reasoning using submodular functions.

am

arxiv link (url) [BibTex]
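
The diminishing-returns property mentioned in the abstract can be checked numerically for a concrete set function; the coverage function below is a standard submodular example chosen for illustration and is not taken from the report.

    # Diminishing returns for a coverage function f(S) = |union of sets in S|
    # (illustrative sketch; the coverage example is not from the report).
    from itertools import combinations

    universe_sets = {
        "a": {1, 2, 3},
        "b": {3, 4},
        "c": {4, 5, 6},
        "d": {1, 6},
    }

    def coverage(S):
        covered = set()
        for name in S:
            covered |= universe_sets[name]
        return len(covered)

    def marginal_gain(S, element):
        return coverage(S | {element}) - coverage(S)

    # Submodularity: for every A subset of B and e not in B,
    # the gain of adding e to A is at least the gain of adding e to B.
    names = set(universe_sets)
    subsets = [set(c) for k in range(len(names) + 1) for c in combinations(names, k)]
    for A in subsets:
        for B in subsets:
            if A <= B:
                for e in names - B:
                    assert marginal_gain(A, e) >= marginal_gain(B, e)
    print("coverage function satisfies diminishing returns on this example")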

A Quantitative Analysis of Current Practices in Optical Flow Estimation and the Principles Behind Them

Sun, D., Roth, S., Black, M. J.

(CS-10-03), Brown University, Department of Computer Science, January 2013 (techreport)

ps

pdf [BibTex]

Coupling between spiking activity and beta band spatio-temporal patterns in the macaque PFC

Safavi, S., Panagiotaropoulos, T., Kapoor, V., Logothetis, N., Besserve, M.

43rd Annual Meeting of the Society for Neuroscience (Neuroscience), 2013 (poster)

ei

[BibTex]

Gaussian Process Vine Copulas for Multivariate Dependence

Lopez-Paz, D., Hernandez-Lobato, J., Ghahramani, Z.

International Conference on Machine Learning (ICML), 2013 (poster)

ei

PDF [BibTex]

Domain Generalization via Invariant Feature Representation

Muandet, K., Balduzzi, D., Schölkopf, B.

30th International Conference on Machine Learning (ICML2013), 2013 (poster)

ei

PDF [BibTex]

Analyzing locking of spikes to spatio-temporal patterns in the macaque prefrontal cortex

Safavi, S., Panagiotaropoulos, T., Kapoor, V., Logothetis, N., Besserve, M.

Bernstein Conference, 2013 (poster)

ei

DOI [BibTex]

One-class Support Measure Machines for Group Anomaly Detection

Muandet, K., Schölkopf, B.

29th Conference on Uncertainty in Artificial Intelligence (UAI), 2013 (poster)

ei

PDF [BibTex]

The Randomized Dependence Coefficient

Lopez-Paz, D., Hennig, P., Schölkopf, B.

Neural Information Processing Systems (NIPS), 2013 (poster)

ei pn

PDF [BibTex]

Characterization of different types of sharp-wave ripple signatures in the CA1 of the macaque hippocampus

Ramirez-Villegas, J., Logothetis, N., Besserve, M.

4th German Neurophysiology PhD Meeting Networks, 2013 (poster)

ei

Web [BibTex]

Animating Samples from Gaussian Distributions

Hennig, P.

(8), Max Planck Institute for Intelligent Systems, Tübingen, Germany, 2013 (techreport)

ei pn

PDF [BibTex]

Domain Generalization via Invariant Feature Representation

Muandet, K.

30th International Conference on Machine Learning (ICML2013), 2013 (talk)

ei

PDF [BibTex]

Maximizing Kepler science return per telemetered pixel: Detailed models of the focal plane in the two-wheel era

Hogg, D. W., Angus, R., Barclay, T., Dawson, R., Fergus, R., Foreman-Mackey, D., Harmeling, S., Hirsch, M., Lang, D., Montet, B. T., Schiminovich, D., Schölkopf, B.

arXiv:1309.0653, 2013 (techreport)

ei

link (url) [BibTex]

Maximizing Kepler science return per telemetered pixel: Searching the habitable zones of the brightest stars

Montet, B. T., Angus, R., Barclay, T., Dawson, R., Fergus, R., Foreman-Mackey, D., Harmeling, S., Hirsch, M., Hogg, D. W., Lang, D., Schiminovich, D., Schölkopf, B.

arXiv:1309.0654, 2013 (techreport)

ei

link (url) [BibTex]


2006


Some observations on the pedestal effect or dipper function

Henning, B., Wichmann, F.

Journal of Vision, 6(13):50, 2006 Fall Vision Meeting of the Optical Society of America, December 2006 (poster)

Abstract
The pedestal effect is the large improvement in the detectability of a sinusoidal “signal” grating observed when the signal is added to a masking or “pedestal” grating of the same spatial frequency, orientation, and phase. We measured the pedestal effect in both broadband and notched noise - noise from which a 1.5-octave band centred on the signal frequency had been removed. Although the pedestal effect persists in broadband noise, it almost disappears in the notched noise. Furthermore, the pedestal effect is substantial when either high- or low-pass masking noise is used. We conclude that the pedestal effect in the absence of notched noise results principally from the use of information derived from channels with peak sensitivities at spatial frequencies different from that of the signal and pedestal. The spatial-frequency components of the notched noise above and below the spatial frequency of the signal and pedestal prevent the use of information about changes in contrast carried in channels tuned to spatial frequencies that are very much different from that of the signal and pedestal. Thus the pedestal or dipper effect measured without notched noise is not a characteristic of individual spatial-frequency tuned channels.

ei

Web DOI [BibTex]

A Kernel Method for the Two-Sample-Problem

Gretton, A., Borgwardt, K., Rasch, M., Schölkopf, B., Smola, A.

20th Annual Conference on Neural Information Processing Systems (NIPS), December 2006 (talk)

Abstract
We propose two statistical tests to determine if two samples are from different distributions. Our test statistic is in both cases the distance between the means of the two samples mapped into a reproducing kernel Hilbert space (RKHS). The first test is based on a large deviation bound for the test statistic, while the second is based on the asymptotic distribution of this statistic. We show that the test statistic can be computed in $O(m^2)$ time. We apply our approach to a variety of problems, including attribute matching for databases using the Hungarian marriage method, where our test performs strongly. We also demonstrate excellent performance when comparing distributions over graphs, for which no alternative tests currently exist.

ei

PDF [BibTex]
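
The test statistic described above — the distance between the sample means in an RKHS — reduces to sums of kernel evaluations. The following is a minimal numpy sketch of the (biased) statistic with a Gaussian kernel; it is illustrative only, not the authors' code, and the significance thresholds from the paper's bounds are not computed.

    # Biased MMD^2 estimate with a Gaussian kernel (illustrative sketch only;
    # the significance threshold from the paper is not computed here).
    import numpy as np

    def gaussian_kernel(X, Y, sigma=1.0):
        sq_dists = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
        return np.exp(-sq_dists / (2 * sigma ** 2))

    def mmd2_biased(X, Y, sigma=1.0):
        # squared distance between the kernel mean embeddings of the two samples
        Kxx = gaussian_kernel(X, X, sigma)
        Kyy = gaussian_kernel(Y, Y, sigma)
        Kxy = gaussian_kernel(X, Y, sigma)
        return Kxx.mean() + Kyy.mean() - 2 * Kxy.mean()

    rng = np.random.default_rng(0)
    X = rng.normal(0.0, 1.0, size=(200, 2))
    Y = rng.normal(0.5, 1.0, size=(200, 2))   # shifted mean -> nonzero MMD
    print(mmd2_biased(X, Y))

As stated in the abstract, the cost is O(m^2) in the sample size, since every pairwise kernel evaluation is formed.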

Ab-initio gene finding using machine learning

Schweikert, G., Zeller, G., Zien, A., Ong, C., de Bona, F., Sonnenburg, S., Phillips, P., Rätsch, G.

NIPS Workshop on New Problems and Methods in Computational Biology, December 2006 (talk)

ei

Web [BibTex]

Graph boosting for molecular QSAR analysis

Saigo, H., Kadowaki, T., Kudo, T., Tsuda, K.

NIPS Workshop on New Problems and Methods in Computational Biology, December 2006 (talk)

Abstract
We propose a new boosting method that systematically combines graph mining and mathematical programming-based machine learning. Informative and interpretable subgraph features are greedily found by a series of graph mining calls. Due to our mathematical programming formulation, subgraph features and pre-calculated real-valued features are seamlessly integrated. We tested our algorithm on a quantitative structure-activity relationship (QSAR) problem, which is basically a regression problem given a set of chemical compounds. In benchmark experiments, the prediction accuracy of our method compared favorably with the best results reported on each dataset.

ei

Web [BibTex]

Inferring Causal Directions by Evaluating the Complexity of Conditional Distributions

Sun, X., Janzing, D., Schölkopf, B.

NIPS Workshop on Causality and Feature Selection, December 2006 (talk)

Abstract
We propose a new approach to infer the causal structure that has generated the observed statistical dependences among n random variables. The idea is that the factorization of the joint measure of cause and effect into P(cause)P(effect|cause) typically leads to simpler conditionals than non-causal factorizations. To evaluate the complexity of the conditionals we have tried two methods. First, we have compared them to those which maximize the conditional entropy subject to the observed first and second moments, since we consider the latter as the simplest conditionals. Second, we have fitted the data with conditional probability measures being exponentials of functions in an RKHS and defined the complexity by a Hilbert-space semi-norm. Such a complexity measure has several properties that are useful for our purpose. We describe some encouraging results with both methods applied to real-world data. Moreover, we have combined constraint-based approaches to causal discovery (i.e., methods using only information on conditional statistical dependences) with our method in order to distinguish between causal hypotheses which are equivalent with respect to the imposed independences. Furthermore, we compare the performance to Bayesian approaches to causal inference.

ei

Web [BibTex]


Minimal Logical Constraint Covering Sets

Sinz, F., Schölkopf, B.

(155), Max Planck Institute for Biological Cybernetics, Tübingen, December 2006 (techreport)

Abstract
We propose a general framework for computing minimal set covers under a certain class of logical constraints. The underlying idea is to transform the problem into a mathematical program under linear constraints. In this sense it can be seen as a natural extension of the vector quantization algorithm proposed by Tipping and Schölkopf. We show which class of logical constraints can be cast and relaxed into linear constraints and give an algorithm for the transformation.

ei

PDF [BibTex]
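
The core step described in the abstract — casting the covering problem as a mathematical program under linear constraints — can be sketched for the plain set-cover case. The LP relaxation below is illustrative only; the logical constraints handled in the report are not modelled, and the incidence matrix is a made-up example.

    # LP relaxation of a plain set-cover instance (illustrative sketch only;
    # the logical constraints treated in the report are not modelled here).
    import numpy as np
    from scipy.optimize import linprog

    # A[i, j] = 1 if candidate set j covers element i (hypothetical instance)
    A = np.array([
        [1, 0, 1, 0],
        [1, 1, 0, 0],
        [0, 1, 0, 1],
        [0, 0, 1, 1],
    ])
    n_sets = A.shape[1]

    # minimize sum_j x_j  subject to  A x >= 1,  0 <= x <= 1
    res = linprog(
        c=np.ones(n_sets),
        A_ub=-A,                 # A x >= 1  is rewritten as  -A x <= -1
        b_ub=-np.ones(A.shape[0]),
        bounds=[(0, 1)] * n_sets,
    )
    print(res.x)                 # fractional cover; rounding yields a feasible cover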

Learning Optimal EEG Features Across Time, Frequency and Space

Farquhar, J., Hill, J., Schölkopf, B.

NIPS Workshop on Current Trends in Brain-Computer Interfacing, December 2006 (talk)

ei

PDF Web [BibTex]

Semi-Supervised Learning

Zien, A.

Advanced Methods in Sequence Analysis Lectures, November 2006 (talk)

ei

Web [BibTex]

New Methods for the P300 Visual Speller

Biessmann, F.

(1), (Editors: Hill, J. ), Max-Planck Institute for Biological Cybernetics, Tübingen, Germany, November 2006 (techreport)

ei

PDF [BibTex]

Optimizing Spatial Filters for BCI: Margin- and Evidence-Maximization Approaches

Farquhar, J., Hill, N., Schölkopf, B.

Challenging Brain-Computer Interfaces: MAIA Workshop 2006, pages: 1, November 2006 (poster)

Abstract
We present easy-to-use alternatives to the often-used two-stage Common Spatial Pattern + classifier approach for spatial filtering and classification of Event-Related Desynchronization signals in BCI. We report two algorithms that aim to optimize the spatial filters according to a criterion more directly related to the ability of the algorithms to generalize to unseen data. Both are based upon the idea of treating the spatial filter coefficients as hyperparameters of a kernel or covariance function. We then optimize these hyperparameters directly alongside the normal classifier parameters with respect to our chosen learning objective function. The two objectives considered are margin maximization as used in Support Vector Machines and the evidence maximization framework used in Gaussian Processes. Our experiments assessed generalization error as a function of the number of training points used, on 9 BCI competition data sets and 5 offline motor imagery data sets measured in Tübingen. Both our approaches show consistent improvements relative to the commonly used CSP+linear classifier combination. Strikingly, the improvement is most significant in the higher noise cases, when either few trials are used for training, or with the most poorly performing subjects. This is a reversal of the usual "rich get richer" effect in the development of CSP extensions, which tend to perform best when the signal is strong enough to accurately find their additional parameters. This makes our approach particularly suitable for clinical application where high levels of noise are to be expected.

ei

PDF PDF [BibTex]

A Machine Learning Approach for Determining the PET Attenuation Map from Magnetic Resonance Images

Hofmann, M., Steinke, F., Judenhofer, M., Claussen, C., Schölkopf, B., Pichler, B.

IEEE Medical Imaging Conference, November 2006 (talk)

Abstract
A promising new combination in multimodality imaging is MR-PET, where the high soft tissue contrast of Magnetic Resonance Imaging (MRI) and the functional information of Positron Emission Tomography (PET) are combined. Although many technical problems have recently been solved, it is still an open problem to determine the attenuation map from the available MR scan, as the MR intensities are not directly related to the attenuation values. One standard approach is an atlas registration where the atlas MR image is aligned with the patient MR thus also yielding an attenuation image for the patient. We also propose another approach, which to our knowledge has not been tried before: Using Support Vector Machines we predict the attenuation value directly from the local image information. We train this well-established machine learning algorithm using small image patches. Although both approaches sometimes yielded acceptable results, they also showed their specific shortcomings: The registration often fails with large deformations whereas the prediction approach is problematic when the local image structure is not characteristic enough. However, the failures often do not coincide and integration of both information sources is promising. We therefore developed a combination method extending Support Vector Machines to use not only local image structure but also atlas registered coordinates. We demonstrate the strength of this combination approach on a number of examples.

ei

[BibTex]
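
The patch-based prediction idea in the abstract — regressing the attenuation value directly from local MR image structure — can be sketched as follows. The arrays mr_image and attenuation_map are hypothetical placeholders, the support vector regression is only one possible choice, and the atlas-registered coordinates used in the combination method are omitted.

    # Sketch of predicting attenuation values from local MR patches with an SVM
    # (illustrative; mr_image and attenuation_map are hypothetical placeholders;
    # atlas-registered coordinates from the combination approach are omitted).
    import numpy as np
    from sklearn.svm import SVR

    patch = 5                                    # patch side length (odd)
    half = patch // 2

    rng = np.random.default_rng(0)
    mr_image = rng.random((64, 64))              # placeholder MR slice
    attenuation_map = rng.random((64, 64))       # placeholder target map

    X, y = [], []
    for i in range(half, 64 - half):
        for j in range(half, 64 - half):
            X.append(mr_image[i - half:i + half + 1, j - half:j + half + 1].ravel())
            y.append(attenuation_map[i, j])
    X, y = np.array(X), np.array(y)

    model = SVR(kernel="rbf", C=1.0).fit(X[::10], y[::10])   # subsample for speed
    predicted = model.predict(X[:5])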

Geometric Analysis of Hilbert Schmidt Independence criterion based ICA contrast function

Shen, H., Jegelka, S., Gretton, A.

(PA006080), National ICT Australia, Canberra, Australia, October 2006 (techreport)

ei

Web [BibTex]

Semi-Supervised Support Vector Machines and Application to Spam Filtering

Zien, A.

ECML Discovery Challenge Workshop, September 2006 (talk)

Abstract
After introducing the semi-supervised support vector machine (aka TSVM for "transductive SVM"), a few popular training strategies are briefly presented. Then the assumptions underlying semi-supervised learning are reviewed. Finally, two modern TSVM optimization techniques are applied to the spam filtering data sets of the workshop; it is shown that they can achieve excellent results, if the problem of the data being non-iid can be handled properly.

ei

PDF Web [BibTex]


Learning Eye Movements

Kienzle, W., Wichmann, F., Schölkopf, B., Franz, M.

Sensory Coding And The Natural Environment, 2006, pages: 1, September 2006 (poster)

Abstract
The human visual system samples images through saccadic eye movements which rapidly change the point of fixation. Although the selection of eye movement targets depends on numerous top-down mechanisms, a number of recent studies have shown that low-level image features such as local contrast or edges play an important role. These studies typically used predefined image features which were afterwards experimentally verified. Here, we follow a complementary approach: instead of testing a set of candidate image features, we infer these hypotheses from the data, using methods from statistical learning. To this end, we train a non-linear classifier on fixated vs. randomly selected image patches without making any physiological assumptions. The resulting classifier can be essentially characterized by a nonlinear combination of two center-surround receptive fields. We find that the prediction performance of this simple model on our eye movement data is indistinguishable from the physiologically motivated model of Itti & Koch (2000) which is far more complex. In particular, we obtain a comparable performance without using any multi-scale representations, long-range interactions or oriented image features.

ei

Web [BibTex]
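
The data-driven part of the study — training a nonlinear classifier to separate fixated from randomly selected image patches — can be sketched as below. The patch arrays are hypothetical placeholders (on such random data the score is at chance level), and the kernel SVM is only one possible choice of nonlinear classifier.

    # Sketch of a nonlinear fixated-vs-random patch classifier
    # (illustrative; the patch arrays below are hypothetical placeholders).
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    fixated_patches = rng.random((500, 13 * 13))   # placeholder fixated patches
    random_patches = rng.random((500, 13 * 13))    # placeholder control patches

    X = np.vstack([fixated_patches, random_patches])
    y = np.concatenate([np.ones(500), np.zeros(500)])
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    clf = SVC(kernel="rbf", C=1.0).fit(X_tr, y_tr)   # nonlinear classifier
    print("held-out accuracy:", clf.score(X_te, y_te))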
