2014


Policy Evaluation with Temporal Differences: A Survey and Comparison

Dann, C., Neumann, G., Peters, J.

Journal of Machine Learning Research, 15, pages: 809-883, 2014 (article)

PDF [BibTex]

Uncovering the Structure and Temporal Dynamics of Information Propagation

Gomez Rodriguez, M., Leskovec, J., Balduzzi, D., Schölkopf, B.

Network Science, 2(1):26-65, 2014 (article)

Abstract
Time plays an essential role in the diffusion of information, influence, and disease over networks. In many cases we can only observe when a node is activated by a contagion—when a node learns about a piece of information, makes a decision, adopts a new behavior, or becomes infected with a disease. However, the underlying network connectivity and transmission rates between nodes are unknown. Inferring the underlying diffusion dynamics is important because it leads to new insights and enables forecasting, as well as influencing or containing information propagation. In this paper we model diffusion as a continuous temporal process occurring at different rates over a latent, unobserved network that may change over time. Given information diffusion data, we infer the edges and dynamics of the underlying network. Our model naturally imposes sparse solutions and requires no parameter tuning. We develop an efficient inference algorithm that uses stochastic convex optimization to compute online estimates of the edges and transmission rates. We evaluate our method by tracking information diffusion among 3.3 million mainstream media sites and blogs, and experiment with more than 179 million different instances of information spreading over the network in a one-year period. We apply our network inference algorithm to the top 5,000 media sites and blogs and report several interesting observations. First, information pathways for general recurrent topics are more stable across time than for on-going news events. Second, clusters of news media sites and blogs often emerge and vanish in a matter of days for on-going news events. Finally, major events, for example, large scale civil unrest as in the Libyan civil war or Syrian uprising, increase the number of information pathways among blogs, and also increase the network centrality of blogs and social media sites.
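As a point of reference, the likelihood machinery underlying this line of work can be sketched as follows, using the exponential transmission model in the notation of the authors' earlier NetRate work; the paper's time-varying, online formulation generalizes it:

\[ f(t_i \mid t_j; \alpha_{j,i}) = \alpha_{j,i}\, e^{-\alpha_{j,i}(t_i - t_j)}, \qquad S(t_i \mid t_j; \alpha_{j,i}) = e^{-\alpha_{j,i}(t_i - t_j)}, \qquad t_j < t_i. \]

Each node infected at time \(t_i\) contributes the survival terms of all earlier-infected potential parents plus the log of their summed hazards \(f/S = \alpha_{j,i}\), giving a concave log-likelihood in the rates \(\alpha_{j,i} \ge 0\) that stochastic gradient ascent can maximize one cascade at a time; rates driven to zero delete edges, which is where the sparsity of the inferred network comes from.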

DOI [BibTex]


Causal discovery via reproducing kernel Hilbert space embeddings

Chen, Z., Zhang, K., Chan, L., Schölkopf, B.

Neural Computation, 26(7):1484-1517, 2014 (article)

DOI [BibTex]

Impact of Large-Scale Climate Extremes on Biospheric Carbon Fluxes: An Intercomparison Based on MsTMIP Data

Zscheischler, J., Michalak, A., Schwalm, M., Mahecha, M., Huntzinger, D., Reichstein, M., Berthier, G., Ciais, P., Cook, R., El-Masri, B., Huang, M., Ito, A., Jain, A., King, A., Lei, H., Lu, C., Mao, J., Peng, S., Poulter, B., Ricciuto, D., Shi, X., Tao, B., Tian, H., Viovy, N., Wang, W., Wei, Y., Yang, J., Zeng, N.

Global Biogeochemical Cycles, 2014 (article)

Web DOI [BibTex]

A Brain-Computer Interface Based on Self-Regulation of Gamma-Oscillations in the Superior Parietal Cortex

Grosse-Wentrup, M., Schölkopf, B.

Journal of Neural Engineering, 11(5):056015, 2014 (article)

Abstract
Objective. Brain–computer interface (BCI) systems are often based on motor- and/or sensory processes that are known to be impaired in late stages of amyotrophic lateral sclerosis (ALS). We propose a novel BCI designed for patients in late stages of ALS that only requires high-level cognitive processes to transmit information from the user to the BCI. Approach. We trained subjects via EEG-based neurofeedback to self-regulate the amplitude of gamma-oscillations in the superior parietal cortex (SPC). We argue that parietal gamma-oscillations are likely to be associated with high-level attentional processes, thereby providing a communication channel that does not rely on the integrity of sensory- and/or motor-pathways impaired in late stages of ALS. Main results. Healthy subjects quickly learned to self-regulate gamma-power in the SPC by alternating between states of focused attention and relaxed wakefulness, resulting in an average decoding accuracy of 70.2%. One locked-in ALS patient (ALS-FRS-R score of zero) achieved an average decoding accuracy significantly above chance-level though insufficient for communication (55.8%). Significance. Self-regulation of gamma-power in the SPC is a feasible paradigm for brain–computer interfacing and may be preserved in late stages of ALS. This provides a novel approach to testing whether completely locked-in ALS patients retain the capacity for goal-directed thinking.

Web DOI [BibTex]


CAM: Causal Additive Models, high-dimensional order search and penalized regression

Bühlmann, P., Peters, J., Ernest, J.

Annals of Statistics, 42(6):2526-2556, 2014 (article)

DOI [BibTex]

Predicting Motor Learning Performance from Electroencephalographic Data

Meyer, T., Peters, J., Zander, T., Schölkopf, B., Grosse-Wentrup, M.

Journal of NeuroEngineering and Rehabilitation, 11:24, 2014 (article)

PDF DOI [BibTex]

Special issue on autonomous grasping and manipulation

Ben Amor, H., Saxena, A., Hudson, N., Peters, J.

Autonomous Robots, 36(1-2):1-3, 2014 (article)

DOI [BibTex]

Evaluation of Positron Emission Tomographic Tracers for Imaging of Papillomavirus-Induced Tumors in Rabbits

Probst, S., Wiehr, S., Mantlik, F., Schmidt, H., Kolb, A., Münch, P., Delcuratolo, M., Stubenrauch, F., Pichler, B., Iftner, T.

Molecular Imaging, 13(1):1536-0121, 2014 (article)

Web [BibTex]

Extreme events in gross primary production: a characterization across continents

Zscheischler, J., Reichstein, M., Harmeling, S., Rammig, A., Tomelleri, E., Mahecha, M.

Biogeosciences, 11, pages: 2909-2924, 2014 (article)

PDF Web DOI [BibTex]

On power law distributions in large-scale taxonomies

Babbar, R., Metzig, C., Partalas, I., Gaussier, E., Amini, M.

SIGKDD Explorations, Special Issue on Big Data, 16(1):47-56, 2014 (article)

[BibTex]

Indirect Robot Model Learning for Tracking Control

Bocsi, B., Csató, L., Peters, J.

Advanced Robotics, 28(9):589-599, 2014 (article)

PDF DOI [BibTex]


An extended approach for spatiotemporal gapfilling: dealing with large and systematic gaps in geoscientific datasets

v Buttlar, J., Zscheischler, J., Mahecha, M.

Nonlinear Processes in Geophysics, 21(1):203-215, 2014 (article)

PDF DOI [BibTex]

On the Quantification Accuracy, Homogeneity, and Stability of Simultaneous Positron Emission Tomography/Magnetic Resonance Imaging Systems

Schmidt, H., Schwenzer, N., Bezrukov, I., Mantlik, F., Kolb, A., Kupferschläger, J., Pichler, B.

Investigative Radiology, 49(6):373-381, 2014 (article)

Web DOI [BibTex]

Natural Evolution Strategies

Wierstra, D., Schaul, T., Glasmachers, T., Sun, Y., Peters, J., Schmidhuber, J.

Journal of Machine Learning Research, 15, pages: 949-980, 2014 (article)

PDF [BibTex]

Factors controlling decomposition rates of fine root litter in temperate forests and grasslands

Solly, E., Schöning, I., Boch, S., Kandeler, E., Marhan, S., Michalzik, B., Müller, J., Zscheischler, J., Trumbore, S., Schrumpf, M.

Plant and Soil, 2014 (article)

PDF DOI [BibTex]

Causal Discovery with Continuous Additive Noise Models

Peters, J., Mooij, J., Janzing, D., Schölkopf, B.

Journal of Machine Learning Research, 15, pages: 2009-2053, 2014 (article)

PDF Web [BibTex]

A few extreme events dominate global interannual variability in gross primary production

Zscheischler, J., Mahecha, M., v Buttlar, J., Harmeling, S., Jung, M., Rammig, A., Randerson, J., Schölkopf, B., Seneviratne, S., Tomelleri, E., Zaehle, S., Reichstein, M.

Environmental Research Letters, 9(3):035001, 2014 (article)

PDF Web DOI [BibTex]

Kernel methods in system identification, machine learning and function estimation: A survey

Pillonetto, G., Dinuzzo, F., Chen, T., De Nicolao, G., Ljung, L.

Automatica, 50(3):657-682, 2014 (article)

Web DOI [BibTex]

Development of a novel depth of interaction PET detector using highly multiplexed G-APD cross-strip encoding

Kolb, A., Parl, C., Mantlik, F., Liu, C., Lorenz, E., Renker, D., Pichler, B.

Medical Physics, 41(8), 2014 (article)

Web DOI [BibTex]

Epidural electrocorticography for monitoring of arousal in locked-in state

Martens, S., Bensch, M., Halder, S., Hill, J., Nijboer, F., Ramos-Murguialday, A., Schölkopf, B., Birbaumer, N., Gharabaghi, A.

Frontiers in Human Neuroscience, 8(861), 2014 (article)

DOI [BibTex]

Simultaneous Whole-Body PET/MR Imaging in Comparison to PET/CT in Pediatric Oncology: Initial Results

Schäfer, J. F., Gatidis, S., Schmidt, H., Gückel, B., Bezrukov, I., Pfannenberg, C. A., Reimold, M., M., E., Fuchs, J., Claussen, C. D., Schwenzer, N. F.

Radiology, 273(1):220-231, 2014 (article)

DOI [BibTex]

A Limiting Property of the Matrix Exponential

Trimpe, S., D’Andrea, R.

IEEE Transactions on Automatic Control, 59(4):1105-1110, 2014 (article)

PDF DOI [BibTex]

Cost-Sensitive Active Learning With Lookahead: Optimizing Field Surveys for Remote Sensing Data Classification

Persello, C., Boularias, A., Dalponte, M., Gobakken, T., Naesset, E., Schölkopf, B.

IEEE Transactions on Geoscience and Remote Sensing, 52(10):6652-6664, 2014 (article)

DOI [BibTex]

Principles of PET/MR Imaging

Disselhorst, J. A., Bezrukov, I., Kolb, A., Parl, C., Pichler, B. J.

Journal of Nuclear Medicine, 55(6, Supplement 2):2S-10S, 2014 (article)

DOI [BibTex]

IM3SHAPE: Maximum likelihood galaxy shear measurement code for cosmic gravitational lensing

Zuntz, J., Kacprzak, T., Voigt, L., Hirsch, M., Rowe, B., Bridle, S.

Astrophysics Source Code Library, 1, pages: 09013, 2014 (article)

link (url) [BibTex]

Event-Based State Estimation With Variance-Based Triggering

Trimpe, S., D’Andrea, R.

IEEE Transactions on Automatic Control, 59(12):3266-3281, 2014 (article)

PDF Supplementary material DOI Project Page [BibTex]

Efficient nearest neighbors via robust sparse hashing

Cherian, A., Sra, S., Morellas, V., Papanikolopoulos, N.

IEEE Transactions on Image Processing, 23(8):3646-3655, 2014 (article)

DOI [BibTex]

Sérsic galaxy models in weak lensing shape measurement: model bias, noise bias and their interaction

Kacprzak, T., Bridle, S., Rowe, B., Voigt, L., Zuntz, J., Hirsch, M., MacCrann, N.

Monthly Notices of the Royal Astronomical Society, 441(3):2528-2538, Oxford University Press, 2014 (article)

DOI [BibTex]

Perspective: Intelligent Systems: Bits and Bots

Spatz, J. P., Schaal, S.

Nature, (509), 2014 (article)

Abstract
What is intelligence, and can we create it? Animals can perceive, reason, react and learn, but they are just one example of an intelligent system. Intelligent systems could be robots as large as humans, helping with search-and-rescue operations in dangerous places, or smart devices as tiny as a cell, delivering drugs to a target within the body. Even computing systems can be intelligent, by perceiving the world, crawling the web and processing "big data" to extract and learn from complex information. Understanding not only how intelligence can be reproduced, but also how to build systems that put these ideas into practice, will be a challenge. Small intelligent systems will require new materials and fabrication methods, as well as compact information processors and power sources. And for nano-sized systems, the rules change altogether. The laws of physics operate very differently at tiny scales: for a nanorobot, swimming through water is like struggling through treacle. Researchers at the Max Planck Institute for Intelligent Systems have begun to solve these problems by developing new computational methods, experimenting with unique robotic systems and fabricating tiny, artificial propellers, like bacterial flagella, to propel nanocreations through their environment.

PDF link (url) [BibTex]

Diminished White Matter Integrity in Patients with Systemic Lupus Erythematosus

Schmidt-Wilcke, T., Cagnoli, P., Wang, P., Schultz, T., Lotz, A., McCune, W. J., Sundgren, P. C.

NeuroImage: Clinical, 5, pages: 291-297, 2014 (article)

DOI [BibTex]

Information-Theoretic Bounded Rationality and ϵ-Optimality

Braun, D. A., Ortega, P. A.

Entropy, 16(8):4662-4676, August 2014 (article)

Abstract
Bounded rationality concerns the study of decision makers with limited information processing resources. Previously, the free energy difference functional has been suggested to model bounded rational decision making, as it provides a natural trade-off between an energy or utility function that is to be optimized and information processing costs that are measured by entropic search costs. The main question of this article is how the information-theoretic free energy model relates to simple \(\epsilon\)-optimality models of bounded rational decision making, where the decision maker is satisfied with any action in an \(\epsilon\)-neighborhood of the optimal utility. We find that the stochastic policies that optimize the free energy trade-off comply with the notion of \(\epsilon\)-optimality. Moreover, this optimality criterion even holds when the environment is adversarial. We conclude that the study of bounded rationality based on \(\epsilon\)-optimality criteria that abstract away from the particulars of the information processing constraints is compatible with the information-theoretic free energy model of bounded rationality.
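For orientation, one common way to write the free energy trade-off from this literature (the article's precise conventions may differ):

\[ F[p] = \mathbb{E}_{p}[U(a)] - \frac{1}{\beta}\,\mathrm{KL}(p \,\|\, p_0), \qquad p^{*}(a) = \frac{p_0(a)\, e^{\beta U(a)}}{\sum_{a'} p_0(a')\, e^{\beta U(a')}}, \]

where \(p_0\) is a prior policy and \(1/\beta\) prices information processing. As \(\beta \to \infty\) the optimal policy concentrates on the utility maximizer; at finite \(\beta\) it spreads probability over near-optimal actions, which is the bridge to \(\epsilon\)-optimality that the article makes precise.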

DOI [BibTex]

An autonomous manipulation system based on force control and optimization

Righetti, L., Kalakrishnan, M., Pastor, P., Binney, J., Kelly, J., Voorhies, R. C., Sukhatme, G. S., Schaal, S.

Autonomous Robots, 36(1-2):11-30, January 2014 (article)

Abstract
In this paper we present an architecture for autonomous manipulation. Our approach is based on the belief that contact interactions during manipulation should be exploited to improve dexterity and that optimizing motion plans is useful to create more robust and repeatable manipulation behaviors. We therefore propose an architecture where state of the art force/torque control and optimization-based motion planning are the core components of the system. We give a detailed description of the modules that constitute the complete system and discuss the challenges inherent to creating such a system. We present experimental results for several grasping and manipulation tasks to demonstrate the performance and robustness of our approach.

link (url) DOI [BibTex]

Occam’s Razor in sensorimotor learning

Genewein, T., Braun, D. A.

Proceedings of the Royal Society of London B, 281(1783):1-7, May 2014 (article)

Abstract
A large number of recent studies suggest that the sensorimotor system uses probabilistic models to predict its environment and makes inferences about unobserved variables in line with Bayesian statistics. One of the important features of Bayesian statistics is Occam's Razor—an inbuilt preference for simpler models when comparing competing models that explain some observed data equally well. Here, we test directly for Occam's Razor in sensorimotor control. We designed a sensorimotor task in which participants had to draw lines through clouds of noisy samples of an unobserved curve generated by one of two possible probabilistic models—a simple model with a large length scale, leading to smooth curves, and a complex model with a short length scale, leading to more wiggly curves. In training trials, participants were informed about the model that generated the stimulus so that they could learn the statistics of each model. In probe trials, participants were then exposed to ambiguous stimuli. In probe trials where the ambiguous stimulus could be fitted equally well by both models, we found that participants showed a clear preference for the simpler model. Moreover, we found that participants’ choice behaviour was quantitatively consistent with Bayesian Occam's Razor. We also show that participants’ drawn trajectories were similar to samples from the Bayesian predictive distribution over trajectories and significantly different from two non-probabilistic heuristics. In two control experiments, we show that the preference of the simpler model cannot be simply explained by a difference in physical effort or by a preference for curve smoothness. Our results suggest that Occam's Razor is a general behavioural principle already present during sensorimotor processing.
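The Bayesian Occam's Razor invoked here is the fact that comparing models by marginal likelihood automatically penalizes flexibility:

\[ P(M \mid D) \propto P(D \mid M)\, P(M), \qquad P(D \mid M) = \int P(D \mid \theta, M)\, P(\theta \mid M)\, d\theta. \]

A short-length-scale model can generate many different curves, so its prior predictive mass on any one ambiguous stimulus is small; when both models fit equally well, the Bayes factor \(P(D \mid M_{\text{simple}}) / P(D \mid M_{\text{complex}})\) favours the simpler model, matching the preference observed in the probe trials.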

DOI [BibTex]

Generalized Thompson sampling for sequential decision-making and causal inference

Ortega, P. A., Braun, D. A.

Complex Adaptive Systems Modeling, 2(2):1-23, March 2014 (article)

Abstract
Purpose: Sampling an action according to the probability that the action is believed to be the optimal one is sometimes called Thompson sampling. Methods: Although mostly applied to bandit problems, Thompson sampling can also be used to solve sequential adaptive control problems, when the optimal policy is known for each possible environment. The predictive distribution over actions can then be constructed by a Bayesian superposition of the policies weighted by their posterior probability of being optimal. Results: Here we discuss two important features of this approach. First, we show in how far such generalized Thompson sampling can be regarded as an optimal strategy under limited information processing capabilities that constrain the sampling complexity of the decision-making process. Second, we show how such Thompson sampling can be extended to solve causal inference problems when interacting with an environment in a sequential fashion. Conclusion: In summary, our results suggest that Thompson sampling might not merely be a useful heuristic, but a principled method to address problems of adaptive sequential decision-making and causal inference.
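To make the starting point concrete, here is a minimal sketch of plain Thompson sampling for a two-armed Bernoulli bandit, i.e., the special case that the article generalizes; the arm probabilities and the Beta prior are illustrative choices, not taken from the paper:

import numpy as np

rng = np.random.default_rng(0)

true_p = np.array([0.4, 0.6])  # hypothetical arm success probabilities, unknown to the agent
alpha = np.ones(2)             # Beta posterior per arm: 1 + observed successes
beta = np.ones(2)              # Beta posterior per arm: 1 + observed failures

for t in range(1000):
    theta = rng.beta(alpha, beta)    # draw one posterior sample per arm
    arm = int(np.argmax(theta))      # act as if the sampled parameters were true
    reward = float(rng.random() < true_p[arm])
    alpha[arm] += reward             # conjugate Beta-Bernoulli update
    beta[arm] += 1.0 - reward

print("posterior means:", alpha / (alpha + beta))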

DOI [BibTex]

Learning of grasp selection based on shape-templates

Herzog, A., Pastor, P., Kalakrishnan, M., Righetti, L., Bohg, J., Asfour, T., Schaal, S.

Autonomous Robots, 36(1-2):51-65, January 2014 (article)

Abstract
The ability to grasp unknown objects still remains an unsolved problem in the robotics community. One of the challenges is to choose an appropriate grasp configuration, i.e., the 6D pose of the hand relative to the object and its finger configuration. In this paper, we introduce an algorithm that is based on the assumption that similarly shaped objects can be grasped in a similar way. It is able to synthesize good grasp poses for unknown objects by finding the best matching object shape templates associated with previously demonstrated grasps. The grasp selection algorithm is able to improve over time by using the information of previous grasp attempts to adapt the ranking of the templates to new situations. We tested our approach on two different platforms, the Willow Garage PR2 and the Barrett WAM robot, which have very different hand kinematics. Furthermore, we compared our algorithm with other grasp planners and demonstrated its superior performance. The results presented in this paper show that the algorithm is able to find good grasp configurations for a large set of unknown objects from a relatively small set of demonstrations, and does improve its performance over time.

link (url) DOI [BibTex]


Assessing randomness and complexity in human motion trajectories through analysis of symbolic sequences

Peng, Z., Genewein, T., Braun, D. A.

Frontiers in Human Neuroscience, 8(168):1-13, March 2014 (article)

Abstract
Complexity is a hallmark of intelligent behavior consisting both of regular patterns and random variation. To quantitatively assess the complexity and randomness of human motion, we designed a motor task in which we translated subjects' motion trajectories into strings of symbol sequences. In the first part of the experiment participants were asked to perform self-paced movements to create repetitive patterns, copy pre-specified letter sequences, and generate random movements. To investigate whether the degree of randomness can be manipulated, in the second part of the experiment participants were asked to perform unpredictable movements in the context of a pursuit game, where they received feedback from an online Bayesian predictor guessing their next move. We analyzed symbol sequences representing subjects' motion trajectories with five common complexity measures: predictability, compressibility, approximate entropy, Lempel-Ziv complexity, as well as effective measure complexity. We found that subjects’ self-created patterns were the most complex, followed by drawing movements of letters and self-paced random motion. We also found that participants could change the randomness of their behavior depending on context and feedback. Our results suggest that humans can adjust both complexity and regularity in different movement types and contexts and that this can be assessed with information-theoretic measures of the symbolic sequences generated from movement trajectories.
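Of the five measures, Lempel-Ziv complexity is the simplest to state in code. A minimal sketch using an LZ78-style incremental parse (the exact variant used in the paper may differ in detail):

def lz_complexity(s):
    """Count the phrases in an LZ78-style incremental parse of symbol string s.

    Each phrase is the shortest prefix of the remaining input not yet seen as a
    phrase; repetitive sequences produce few phrases, irregular ones produce many.
    """
    phrases = set()
    phrase = ""
    for symbol in s:
        phrase += symbol
        if phrase not in phrases:
            phrases.add(phrase)
            phrase = ""
    return len(phrases) + (1 if phrase else 0)

print(lz_complexity("abababababab"))    # repetitive pattern: few phrases
print(lz_complexity("abbabaabbbaaab"))  # more irregular: more phrases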

DOI [BibTex]

2006


Structure validation of the Josephin domain of ataxin-3: Conclusive evidence for an open conformation

Nicastro, G., Habeck, M., Masino, L., Svergun, D. I., Pastore, A.

Journal of Biomolecular NMR, 36(4):267-277, December 2006 (article)

Abstract
The availability of new and fast tools in structure determination has led to a more than exponential growth of the number of structures solved per year. It is therefore increasingly essential to assess the accuracy of the new structures by reliable approaches able to assist validation. Here, we discuss a specific example in which the use of different complementary techniques, which include Bayesian methods and small angle scattering, resulted essential for validating the two currently available structures of the Josephin domain of ataxin-3, a protein involved in the ubiquitin/proteasome pathway and responsible for neurodegenerative spinocerebellar ataxia of type 3. Taken together, our results demonstrate that only one of the two structures is compatible with the experimental information. Based on the high precision of our refined structure, we show that Josephin contains an open cleft which could be directly implicated in the interaction with polyubiquitin chains and other partners.

Web DOI [BibTex]

A Unifying View of Wiener and Volterra Theory and Polynomial Kernel Regression

Franz, M., Schölkopf, B.

Neural Computation, 18(12):3097-3118, December 2006 (article)

Abstract
Volterra and Wiener series are perhaps the best understood nonlinear system representations in signal processing. Although both approaches have enjoyed a certain popularity in the past, their application has been limited to rather low-dimensional and weakly nonlinear systems due to the exponential growth of the number of terms that have to be estimated. We show that Volterra and Wiener series can be represented implicitly as elements of a reproducing kernel Hilbert space by utilizing polynomial kernels. The estimation complexity of the implicit representation is linear in the input dimensionality and independent of the degree of nonlinearity. Experiments show performance advantages in terms of convergence, interpretability, and system sizes that can be handled.
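The computational point is easy to demonstrate: an inhomogeneous polynomial kernel of degree p implicitly spans all Volterra terms up to order p, so estimation cost grows with the number of samples rather than the number of series coefficients. A minimal sketch with scikit-learn's kernel ridge regression standing in for the estimators analyzed in the paper (data and hyperparameters are illustrative):

import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(1)

# Toy weakly nonlinear system: the output contains a second-order Volterra term.
X = rng.standard_normal((200, 5))
y = X[:, 0] - 0.5 * X[:, 1] * X[:, 2] + 0.1 * rng.standard_normal(200)

# A degree-2 inhomogeneous polynomial kernel covers all terms up to second order.
model = KernelRidge(kernel="polynomial", degree=2, coef0=1.0, alpha=1e-2)
model.fit(X, y)
print("training MSE:", np.mean((model.predict(X) - y) ** 2))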

PDF Web DOI [BibTex]

Statistical Analysis of Slow Crack Growth Experiments

Pfingsten, T., Glien, K.

Journal of the European Ceramic Society, 26(15):3061-3065, November 2006 (article)

Abstract
Common approaches for the determination of Slow Crack Growth (SCG) parameters are the static and dynamic loading methods. Since materials with a small Weibull modulus show a large variability in strength, a correct statistical analysis of the data is indispensable. In this work we propose the use of the maximum likelihood method and a Bayesian analysis, which, in contrast to the standard procedures, take into account that failure strengths are Weibull distributed. The analysis provides estimates for the SCG parameters, the Weibull modulus, and the corresponding confidence intervals and overcomes the necessity of manual differentiation between inert and fatigue strength data. We compare the methods to a least squares approach, which can be considered the standard procedure. The results for dynamic loading data from the glass sealing of MEMS devices show that the assumptions inherent to the standard approach lead to significantly different estimates.
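A minimal sketch of the statistical core, replacing least squares on linearized data with a Weibull maximum likelihood fit; SciPy's two-parameter fit stands in for the paper's full SCG analysis, and all numbers are illustrative:

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Hypothetical failure strengths: true Weibull modulus 8, characteristic strength 300.
strengths = stats.weibull_min.rvs(c=8.0, scale=300.0, size=50, random_state=rng)

# Maximum likelihood fit of the two-parameter Weibull (location fixed at zero).
m_hat, _, scale_hat = stats.weibull_min.fit(strengths, floc=0)
print(f"estimated Weibull modulus {m_hat:.2f}, characteristic strength {scale_hat:.1f}")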

PDF PDF DOI [BibTex]

Mining frequent stem patterns from unaligned RNA sequences

Hamada, M., Tsuda, K., Kudo, T., Kin, T., Asai, K.

Bioinformatics, 22(20):2480-2487, October 2006 (article)

Abstract
Motivation: In detection of non-coding RNAs, it is often necessary to identify the secondary structure motifs from a set of putative RNA sequences. Most of the existing algorithms aim to provide the best motif or few good motifs, but biologists often need to inspect all the possible motifs thoroughly. Results: Our method RNAmine employs a graph theoretic representation of RNA sequences, and detects all the possible motifs exhaustively using a graph mining algorithm. The motif detection problem boils down to finding frequently appearing patterns in a set of directed and labeled graphs. In the tasks of common secondary structure prediction and local motif detection from long sequences, our method performed favorably both in accuracy and in efficiency with the state-of-the-art methods such as CMFinder.

PDF Web DOI [BibTex]

Large-Scale Gene Expression Profiling Reveals Major Pathogenetic Pathways of Cartilage Degeneration in Osteoarthritis

Aigner, T., Fundel, K., Saas, J., Gebhard, P., Haag, J., Weiss, T., Zien, A., Obermayr, F., Zimmer, R., Bartnik, E.

Arthritis and Rheumatism, 54(11):3533-3544, October 2006 (article)

Abstract
Objective. Despite many research efforts in recent decades, the major pathogenetic mechanisms of osteoarthritis (OA), including gene alterations occurring during OA cartilage degeneration, are poorly understood, and there is no disease-modifying treatment approach. The present study was therefore initiated in order to identify differentially expressed disease-related genes and potential therapeutic targets. Methods. This investigation consisted of a large gene expression profiling study performed based on 78 normal and disease samples, using a custom-made complementary DNA array covering >4,000 genes. Results. Many differentially expressed genes were identified, including the expected up-regulation of anabolic and catabolic matrix genes. In particular, the down-regulation of important oxidative defense genes, i.e., the genes for superoxide dismutases 2 and 3 and glutathione peroxidase 3, was prominent. This indicates that continuous oxidative stress to the cells and the matrix is one major underlying pathogenetic mechanism in OA. Also, genes that are involved in the phenotypic stability of cells, a feature that is greatly reduced in OA cartilage, appeared to be suppressed. Conclusion. Our findings provide a reference data set on gene alterations in OA cartilage and, importantly, indicate major mechanisms underlying central cell biologic alterations that occur during the OA disease process. These results identify molecular targets that can be further investigated in the search for therapeutic interventions.

Web DOI [BibTex]

Implicit Surface Modelling with a Globally Regularised Basis of Compact Support

Walder, C., Schölkopf, B., Chapelle, O.

Computer Graphics Forum, 25(3):635-644, September 2006 (article)

Abstract
We consider the problem of constructing a globally smooth analytic function that represents a surface implicitly by way of its zero set, given sample points with surface normal vectors. The contributions of the paper include a novel means of regularising multi-scale compactly supported basis functions that leads to the desirable interpolation properties previously only associated with fully supported bases. We also provide a regularisation framework for simpler and more direct treatment of surface normals, along with a corresponding generalisation of the representer theorem lying at the core of kernel-based machine learning methods. We demonstrate the techniques on 3D problems of up to 14 million data points, as well as 4D time series data and four-dimensional interpolation between three-dimensional shapes.
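The representer-theorem structure at the core of the method, in generic kernel-methods notation (the paper's multi-scale, compactly supported basis refines this):

\[ f(x) = \sum_{i=1}^{n} \alpha_i\, k(x, x_i), \qquad \mathcal{S} = \{\, x \in \mathbb{R}^3 : f(x) = 0 \,\}, \]

with the coefficients \(\alpha_i\) chosen so that \(f\) vanishes at the surface samples and its gradient follows the given normals, while a penalty on \(\|f\|_{\mathcal{H}}\) keeps the implicit surface smooth.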

PDF GZIP DOI [BibTex]


Semi-Supervised Learning

Chapelle, O., Schölkopf, B., Zien, A.

pages: 508, Adaptive computation and machine learning, MIT Press, Cambridge, MA, USA, September 2006 (book)

Abstract
In the field of machine learning, semi-supervised learning (SSL) occupies the middle ground, between supervised learning (in which all training examples are labeled) and unsupervised learning (in which no label data are given). Interest in SSL has increased in recent years, particularly because of application domains in which unlabeled data are plentiful, such as images, text, and bioinformatics. This first comprehensive overview of SSL presents state-of-the-art algorithms, a taxonomy of the field, selected applications, benchmark experiments, and perspectives on ongoing and future research. Semi-Supervised Learning first presents the key assumptions and ideas underlying the field: smoothness, cluster or low-density separation, manifold structure, and transduction. The core of the book is the presentation of SSL methods, organized according to algorithmic strategies. After an examination of generative models, the book describes algorithms that implement the low-density separation assumption, graph-based methods, and algorithms that perform two-step learning. The book then discusses SSL applications and offers guidelines for SSL practitioners by analyzing the results of extensive benchmark experiments. Finally, the book looks at interesting directions for SSL research. The book closes with a discussion of the relationship between semi-supervised learning and transduction.
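As a taste of the graph-based family the book surveys, here is a minimal label-spreading sketch on two-moons data; scikit-learn's implementation stands in for the algorithms presented in the book, and the hyperparameters are ad hoc:

import numpy as np
from sklearn.datasets import make_moons
from sklearn.semi_supervised import LabelSpreading

# Two interleaved half-circles: a textbook case of the cluster/manifold assumption.
X, y = make_moons(n_samples=300, noise=0.08, random_state=0)

# Keep only 10 labels; mark the rest as unlabeled (-1), as the API expects.
y_partial = np.full_like(y, -1)
labeled = np.random.default_rng(0).choice(len(y), size=10, replace=False)
y_partial[labeled] = y[labeled]

model = LabelSpreading(kernel="rbf", gamma=20.0)
model.fit(X, y_partial)
print("transductive accuracy:", (model.transduction_ == y).mean())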

Web [BibTex]

An Online Support Vector Machine for Abnormal Events Detection

Davy, M., Desobry, F., Gretton, A., Doncarli, C.

Signal Processing, 86(8):2009-2025, August 2006 (article)

Abstract
The ability to detect online abnormal events in signals is essential in many real-world Signal Processing applications. Previous algorithms require an explicit signal statistical model, and interpret abnormal events as statistical model abrupt changes. Corresponding implementation relies on maximum likelihood or on Bayes estimation theory with generally excellent performance. However, there are numerous cases where a robust and tractable model cannot be obtained, and model-free approaches need to be considered. In this paper, we investigate a machine learning, descriptor-based approach that does not require an explicit descriptors statistical model, based on Support Vector novelty detection. A sequential optimization algorithm is introduced. Theoretical considerations as well as simulations on real signals demonstrate its practical efficiency.
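scikit-learn's one-class SVM is a batch solver, so the sketch below illustrates only the detection side of the approach, not the paper's sequential optimization algorithm; data and hyperparameters are illustrative:

import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(3)

# Descriptors from normal operation, plus a shifted segment acting as the abnormal event.
normal = rng.standard_normal((500, 2))
abnormal = rng.standard_normal((20, 2)) + 4.0

detector = OneClassSVM(kernel="rbf", nu=0.05, gamma=0.5).fit(normal)
print("abnormal flagged:", (detector.predict(abnormal) == -1).mean())
print("false alarm rate:", (detector.predict(normal) == -1).mean())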

PDF PostScript PDF DOI [BibTex]

Integrating Structured Biological data by Kernel Maximum Mean Discrepancy

Borgwardt, K., Gretton, A., Rasch, M., Kriegel, H., Schölkopf, B., Smola, A.

Bioinformatics, 22(4: ISMB 2006 Conference Proceedings):e49-e57, August 2006 (article)

Abstract
Motivation: Many problems in data integration in bioinformatics can be posed as one common question: Are two sets of observations generated by the same distribution? We propose a kernel-based statistical test for this problem, based on the fact that two distributions are different if and only if there exists at least one function having different expectation on the two distributions. Consequently we use the maximum discrepancy between function means as the basis of a test statistic. The Maximum Mean Discrepancy (MMD) can take advantage of the kernel trick, which allows us to apply it not only to vectors, but strings, sequences, graphs, and other common structured data types arising in molecular biology. Results: We study the practical feasibility of an MMD-based test on three central data integration tasks: Testing cross-platform comparability of microarray data, cancer diagnosis, and data-content based schema matching for two different protein function classification schemas. In all of these experiments, including high-dimensional ones, MMD is very accurate in finding samples that were generated from the same distribution, and outperforms its best competitors. Conclusions: We have defined a novel statistical test of whether two samples are from the same distribution, compatible with both multivariate and structured data, that is fast, easy to implement, and works well, as confirmed by our experiments.
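A minimal sketch of the statistic itself, as a biased V-statistic estimate with an RBF kernel (the paper also derives unbiased estimates and the test threshold, which this sketch omits):

import numpy as np

def mmd2_rbf(X, Y, gamma=1.0):
    """Biased estimate of squared MMD between samples X and Y under an RBF kernel:
    MMD^2 = E[k(x, x')] + E[k(y, y')] - 2 E[k(x, y)]."""
    def k(A, B):
        sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * sq_dists)
    return k(X, X).mean() + k(Y, Y).mean() - 2.0 * k(X, Y).mean()

rng = np.random.default_rng(4)
same = mmd2_rbf(rng.standard_normal((200, 3)), rng.standard_normal((200, 3)))
diff = mmd2_rbf(rng.standard_normal((200, 3)), rng.standard_normal((200, 3)) + 1.0)
print(f"same distribution: {same:.4f}, shifted distribution: {diff:.4f}")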

Web DOI [BibTex]
