
Matrix Exponentiated Gradient Updates for On-line Learning and Bregman Projection

2005

Article



We address the problem of learning a symmetric positive definite matrix. The central issue is to design parameter updates that preserve positive definiteness. Our updates are motivated by the von Neumann divergence. Rather than treating the most general case, we focus on two key applications that exemplify our methods: on-line learning with a simple square loss, and finding a symmetric positive definite matrix subject to linear constraints. The updates generalize the exponentiated gradient (EG) update and AdaBoost, respectively: the parameter is now a symmetric positive definite matrix of trace one instead of a probability vector (which in this context is a diagonal positive definite matrix with trace one). The generalized updates use matrix logarithms and exponentials to preserve positive definiteness. Most importantly, we show how the derivation and the analyses of the original EG update and AdaBoost generalize to the non-diagonal case. We apply the resulting matrix exponentiated gradient (MEG) update and DefiniteBoost to the problem of learning a kernel matrix from distance measurements.
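To make the on-line update concrete, below is a minimal Python sketch of one MEG step under the simple square loss mentioned in the abstract: the gradient step is taken in the matrix-log domain, then the result is re-exponentiated and renormalized to trace one so positive definiteness is preserved. The instance matrix X, target y, and learning rate eta are illustrative assumptions, not values from the paper; consult the paper for the exact formulation and analysis.

import numpy as np
from scipy.linalg import expm, logm

def meg_update(W, X, y, eta=0.1):
    # One MEG step: gradient step in the matrix-log domain, then
    # re-exponentiate and renormalize so W stays symmetric positive
    # definite with trace one.
    y_hat = np.trace(W @ X)          # linear prediction tr(W X)
    grad = 2.0 * (y_hat - y) * X     # gradient of the square loss w.r.t. W
    M = expm(logm(W) - eta * grad)   # matrix exponentiated gradient step
    return M / np.trace(M)           # renormalize to trace one

# Tiny usage example with a random symmetric instance matrix (illustrative data).
rng = np.random.default_rng(0)
d = 3
W = np.eye(d) / d                    # trace-one, positive definite start
A = rng.standard_normal((d, d))
X = (A + A.T) / 2.0                  # symmetric instance
W = meg_update(W, X, y=0.5)
print(np.trace(W))                         # stays 1 up to numerical error
print(np.all(np.linalg.eigvalsh(W) > 0))   # remains positive definite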

Author(s): Tsuda, K. and Rätsch, G. and Warmuth, M.
Journal: Journal of Machine Learning Research
Volume: 6
Pages: 995-1018
Year: 2005
Month: June

Department(s): Empirical Inference
Bibtex Type: Article (article)

Language: en
Organization: Max-Planck-Gesellschaft
School: Biologische Kybernetik

Links: PDF

BibTeX

@article{4143,
  title = {Matrix Exponentiated Gradient Updates for On-line Learning and Bregman Projection},
  author = {Tsuda, K. and R{\"a}tsch, G. and Warmuth, M.},
  journal = {Journal of Machine Learning Research},
  volume = {6},
  pages = {995--1018},
  organization = {Max-Planck-Gesellschaft},
  school = {Biologische Kybernetik},
  month = jun,
  year = {2005}
}