I'm working on image denoising, which is the problem of finding a clean image, given a noisy one. Noise in images arises for a number of reasons, including imperfect digital image sensors. The problem is of growing importance due to an explosion in the number of digital images recorded every day and the fact that all digital images contain some amount of noise. My personal web-page is kept more up-to-date.
My research can be divided into three main categories:
Astronomical image denoising with a pixel-specific noise model. For digital photographs of astronomical objects, where exposure times are long, dark-current noise is a significant source of noise. Denoising methods usually assume additive white Gaussian noise with equal variance at each pixel. However, dark-current noise has different properties at every pixel. We use a pixel-specific noise model to handle dark-current noise, as well as an image prior adapted to astronomical images. Our method is shown to perform well in a laboratory environment and produces visually appealing results in a real-world setting.
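The idea of a pixel-specific noise model can be illustrated with a minimal sketch: if the noise is Gaussian with a variance that differs per pixel, and we use a simple Gaussian prior as a stand-in for the astronomical image prior from the paper, the MAP estimate reduces to a per-pixel shrinkage toward the prior mean. The function name and the Gaussian prior are illustrative assumptions, not the published method:

```python
import numpy as np

def denoise_pixelwise(noisy, noise_var, prior_mean, prior_var):
    """Per-pixel MAP estimate under Gaussian noise with a
    pixel-specific variance map (illustrative sketch; the paper's
    prior is adapted to astronomical images, not Gaussian)."""
    # Posterior mean of a Gaussian prior combined with a Gaussian
    # likelihood whose variance differs at every pixel: pixels with
    # high noise variance are pulled more strongly toward the prior.
    w = prior_var / (prior_var + noise_var)
    return prior_mean + w * (noisy - prior_mean)
```

Note how the weight `w` varies spatially with `noise_var`, which is exactly what a global-variance Gaussian model cannot express.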
A multi-scale meta-procedure for improving existing denoising algorithms. Most denoising algorithms focus on recovering high frequencies. However, for high noise levels it is also important to recover low frequencies. We present a multi-scale meta-procedure that applies existing denoising algorithms across different scales and combines the resulting images into a single denoised image. We show that our method can improve the results achieved by many denoising algorithms.
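As a rough illustration of such a meta-procedure, the sketch below applies an arbitrary single-scale denoiser at several dyadic scales and replaces the low frequencies of the fine-scale result with the coarse-scale result. The block-average downsampler, nearest-neighbour upsampler, and combination rule are simplified assumptions for illustration, not the procedure from the paper:

```python
import numpy as np

def multiscale_denoise(img, denoiser, levels=3):
    """Apply `denoiser` (any function mapping an image to a denoised
    image) at several scales and recombine (illustrative sketch)."""
    def down(x):  # 2x2 block averaging as a simple downsampler
        h, w = x.shape[0] // 2 * 2, x.shape[1] // 2 * 2
        x = x[:h, :w]
        return 0.25 * (x[0::2, 0::2] + x[1::2, 0::2]
                       + x[0::2, 1::2] + x[1::2, 1::2])
    def up(x, shape):  # nearest-neighbour upsampling back to `shape`
        y = np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)
        y = np.pad(y, ((0, max(0, shape[0] - y.shape[0])),
                       (0, max(0, shape[1] - y.shape[1]))), mode="edge")
        return y[:shape[0], :shape[1]]
    out = denoiser(img)
    if levels > 1 and min(img.shape) >= 4:
        coarse = multiscale_denoise(down(img), denoiser, levels - 1)
        # Swap the low frequencies of the fine result for the
        # (better-denoised) coarse result.
        out = out - up(down(out), out.shape) + up(coarse, out.shape)
    return out
```

With an identity "denoiser" the correction terms cancel and the input is returned unchanged, which makes the decomposition easy to sanity-check.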
State-of-the-art image denoising with multi-layer perceptrons. Many of the best-performing denoising methods rely on cleverly engineered algorithms. In contrast, we take a learning approach to denoising and train a multi-layer perceptron to denoise image patches. Using this approach, we outperform the previous state of the art. Our approach also achieves results that are superior to one type of theoretical bound and goes a long way toward closing the gap to a second type of theoretical bound. Furthermore, we achieve outstanding results on other types of noise, including JPEG artifacts and Poisson noise. We also show that multi-layer perceptrons can be used to combine the results of several denoising algorithms; this approach often yields better results than the best method in the combination. We discuss in detail which trade-offs have to be considered during the training procedure, and we make observations regarding the functioning principle of multi-layer perceptrons for image denoising.
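A minimal sketch of how a trained patch-based MLP is applied at test time: run the network on every overlapping patch and average the overlapping predictions. The layer structure, patch size, and weights below are placeholders; the trained models in this line of work are far larger:

```python
import numpy as np

def mlp_forward(patch_vec, weights):
    """Forward pass of a multi-layer perceptron: affine layers with
    tanh hidden activations and a linear output layer (sizes are
    illustrative, not those of the trained models)."""
    h = patch_vec
    for i, (W, b) in enumerate(weights):
        h = W @ h + b
        if i < len(weights) - 1:
            h = np.tanh(h)
    return h

def denoise_with_mlp(noisy, weights, p=8):
    """Denoise by running the MLP on every overlapping p x p patch
    and averaging the overlapping predictions (sketch)."""
    H, W = noisy.shape
    out = np.zeros_like(noisy)
    cnt = np.zeros_like(noisy)
    for i in range(H - p + 1):
        for j in range(W - p + 1):
            patch = noisy[i:i + p, j:j + p].ravel()
            pred = mlp_forward(patch, weights).reshape(p, p)
            out[i:i + p, j:j + p] += pred
            cnt[i:i + p, j:j + p] += 1
    return out / cnt  # average the overlapping patch estimates
```

Averaging the overlapping patch outputs is itself a simple form of aggregation; the training procedure that produces `weights` is where the trade-offs discussed above come in.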
Cavusoglu, M., Pohmann, R., Burger, H., Uludag, K.
Magnetic Resonance in Medicine, 69(2):524-530, February 2013 (article)
Most experiments assume a global transit delay time, with blood flowing from the tagging region to the imaging slice in plug flow without any dispersion of the magnetization. However, because of cardiac pulsation, the nonuniform cross-sectional flow profile, and complex vessel networks, the transit delay time is not a single value but follows a distribution. In this study, we explored the regional effects of magnetization dispersion on quantitative perfusion imaging for transit times varying within a very large interval, from the direct comparison of pulsed, pseudo-continuous, and dual-coil continuous arterial spin labeling encoding schemes. The longer distances between the tagging and imaging regions typically used for continuous tagging schemes enhance the regional bias in the quantitative cerebral blood flow measurement, causing an underestimation of up to 37% when plug flow is assumed, as in the standard model.
Weber, B., Spaeth, N., Wyss, M., Wild, D., Burger, C., Stanley, R., Buck, A.
Journal of Cerebral Blood Flow and Metabolism, 23(12):1455-1460, December 2003 (article)
Beta-probes are a relatively new tool for tracer kinetic studies in animals. They are highly suited to evaluating new positron emission tomography tracers or measuring physiologic parameters at rest and after some kind of stimulation or intervention. In many of these experiments, knowledge of CBF is highly important. Thus, the purpose of this study was to evaluate the method of CBF measurement using a beta-probe and H₂¹⁵O. CBF was measured in the barrel cortex of eight rats at baseline and after acetazolamide challenge. Trigeminal nerve stimulation was additionally performed in five animals. In each category, three injections of 250 to 300 MBq H₂¹⁵O were performed at 10-minute intervals. Data were analyzed using a standard one-tissue compartment model (K1 = CBF, k2 = CBF/p, where p is the partition coefficient). Values for K1 were 0.35 ± 0.09, 0.58 ± 0.16, and 0.49 ± 0.03 mL·min⁻¹·mL⁻¹ at rest, after acetazolamide challenge, and during trigeminal nerve stimulation, respectively. The corresponding values for k2 were 0.55 ± 0.12, 0.94 ± 0.16, and 0.85 ± 0.12 min⁻¹, and for p were 0.64 ± 0.05, 0.61 ± 0.07, and 0.59 ± 0.06. The standard deviation of the difference between two successive experiments, a measure of the reproducibility of the method, was 10.1%, 13.0%, and 5.7% for K1, k2, and p, respectively. In summary, beta-probes in conjunction with H₂¹⁵O allow reproducible quantitative measurement of CBF, although some systematic underestimation seems to occur, probably because of partial volume effects.
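The standard one-tissue compartment model used here, dC/dt = K1·Ca(t) − k2·C(t) with K1 = CBF and k2 = CBF/p, can be sketched numerically. The simple Euler integrator and the constant arterial input below are illustrative assumptions, not the analysis pipeline of the study:

```python
import numpy as np

def tissue_curve(t, ca, K1, k2):
    """Tissue tracer concentration C(t) for the one-tissue
    compartment model dC/dt = K1*Ca(t) - k2*C(t), integrated with
    a simple forward-Euler scheme (illustrative sketch)."""
    c = np.zeros_like(t, dtype=float)
    for i in range(1, len(t)):
        dt = t[i] - t[i - 1]
        # influx K1*Ca minus washout k2*C over one time step
        c[i] = c[i - 1] + dt * (K1 * ca[i - 1] - k2 * c[i - 1])
    return c
```

For a constant arterial input the curve plateaus at K1/k2 = p, which is why the ratio of the two fitted rate constants yields the partition coefficient.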
Our goal is to understand the principles of Perception, Action and Learning in autonomous systems that successfully interact with complex environments, and to use this understanding to design future systems.