2017

Improving performance of linear field generation with multi-coil setup by optimizing coils position

Aghaeifar, A., Loktyushin, A., Eschelbach, M., Scheffler, K.

Magnetic Resonance Materials in Physics, Biology and Medicine, 30(Supplement 1):S259, 34th Annual Scientific Meeting of the European Society for Magnetic Resonance in Medicine and Biology (ESMRMB), October 2017 (poster)

link (url) DOI [BibTex]

Editorial for the Special Issue on Microdevices and Microsystems for Cell Manipulation

Hu, W., Ohta, A. T.

8, Multidisciplinary Digital Publishing Institute, September 2017 (misc)

DOI [BibTex]

Parameterized Model of 2D Articulated Human Shape

Black, M. J., Freifeld, O., Weiss, A., Loper, M., Guan, P.

September 2017, U.S. Patent 9,761,060 (misc)

Abstract
Disclosed are computer-readable devices, systems and methods for generating a model of a clothed body. The method includes generating a model of an unclothed human body, the model capturing a shape or a pose of the unclothed human body, determining two-dimensional contours associated with the model, and computing deformations by aligning a contour of a clothed human body with a contour of the unclothed human body. Based on the two-dimensional contours and the deformations, the method includes generating a first two-dimensional model of the unclothed human body, the first two-dimensional model factoring the deformations of the unclothed human body into one or more of a shape variation component, a viewpoint change, and a pose variation and learning an eigen-clothing model using principal component analysis applied to the deformations, wherein the eigen-clothing model classifies different types of clothing, to yield a second two-dimensional model of a clothed human body.
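The eigen-clothing step the abstract describes, principal component analysis applied to contour deformations, can be sketched in a few lines of numpy. Everything below (the number of contour points, the latent clothing factors, the 95% variance cutoff) is invented toy data for illustration, not the patent's actual training set or method details:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy "clothing deformations": offsets of 50 contour points (x, y flattened
# to 100 values), generated from 3 latent clothing factors plus small noise.
latents = rng.normal(size=(200, 3))
basis = rng.normal(size=(3, 100))
deformations = latents @ basis + 0.01 * rng.normal(size=(200, 100))

# Eigen-clothing via PCA: subtract the mean deformation, then take the SVD.
mean_def = deformations.mean(axis=0)
U, S, Vt = np.linalg.svd(deformations - mean_def, full_matrices=False)
explained = (S ** 2) / (S ** 2).sum()

# Keep enough principal deformation modes to explain 95% of the variance.
n_components = int(np.searchsorted(np.cumsum(explained), 0.95) + 1)
eigen_clothing = Vt[:n_components]                      # deformation modes
coeffs = (deformations - mean_def) @ eigen_clothing.T   # per-example codes
```

With three dominant latent factors, the 95% cutoff recovers a three-mode eigen-clothing basis; the `coeffs` are the low-dimensional codes such a model would classify.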

Google Patents [BibTex]

Physical and Behavioral Factors Improve Robot Hug Quality

Block, A. E., Kuchenbecker, K. J.

Workshop Paper (2 pages) presented at the RO-MAN Workshop on Social Interaction and Multimodal Expression for Socially Intelligent Robots, Lisbon, Portugal, August 2017 (misc)

Abstract
A hug is one of the most basic ways humans can express affection. As hugs are so common, a natural progression of robot development is to have robots one day hug humans as seamlessly as these intimate human-human interactions occur. This project’s purpose is to evaluate human responses to different robot physical characteristics and hugging behaviors. Specifically, we aim to test the hypothesis that a warm, soft, touch-sensitive PR2 humanoid robot can provide humans with satisfying hugs by matching both their hugging pressure and their hugging duration. Thirty participants experienced and evaluated twelve hugs with the robot, divided into three randomly ordered trials that focused on physical robot characteristics and nine randomly ordered trials with varied hug pressure and duration. We found that people prefer soft, warm hugs over hard, cold hugs. Furthermore, users prefer hugs that physically squeeze them and release immediately when they are ready for the hug to end.

Project Page [BibTex]

Crowdshaping Realistic 3D Avatars with Words

Streuber, S., Ramirez, M. Q., Black, M., Zuffi, S., O’Toole, A., Hill, M. Q., Hahn, C. A.

August 2017, Application PCT/EP2017/051954 (misc)

Abstract
A method for generating a body shape, comprising the steps of: receiving one or more linguistic descriptors related to the body shape; retrieving an association between the one or more linguistic descriptors and a body shape; and generating the body shape based on the association.

Google Patents [BibTex]

Physically Interactive Exercise Games with a Baxter Robot

Fitter, N. T., Kuchenbecker, K. J.

Hands-on demonstration presented at the IEEE World Haptics Conference (WHC), Munich, Germany, June 2017 (misc)

Project Page [BibTex]

System and method for simulating realistic clothing

Black, M. J., Guan, P.

June 2017, U.S. Patent 9,679,409 B2 (misc)

Abstract
Systems, methods, and computer-readable storage media for simulating realistic clothing. The system generates a clothing deformation model for a clothing type, wherein the clothing deformation model factors a change of clothing shape due to rigid limb rotation, pose-independent body shape, and pose-dependent deformations. Next, the system generates a custom-shaped garment for a given body by mapping, via the clothing deformation model, body shape parameters to clothing shape parameters. The system then automatically dresses the given body with the custom- shaped garment.

Google Patents pdf [BibTex]

Proton Pack: Visuo-Haptic Surface Data Recording

Burka, A., Kuchenbecker, K. J.

Hands-on demonstration presented at the IEEE World Haptics Conference (WHC), Munich, Germany, June 2017 (misc)

Project Page [BibTex]

Teaching a Robot to Collaborate with a Human Via Haptic Teleoperation

Hu, S., Kuchenbecker, K. J.

Work-in-progress paper (2 pages) presented at the IEEE World Haptics Conference (WHC), Munich, Germany, June 2017 (misc)

Project Page [BibTex]

How Should Robots Hug?

Block, A. E., Kuchenbecker, K. J.

Work-in-progress paper (2 pages) presented at the IEEE World Haptics Conference (WHC), Munich, Germany, June 2017 (misc)

Project Page [BibTex]

An Interactive Augmented-Reality Video Training Platform for the da Vinci Surgical System

Carlson, J., Kuchenbecker, K. J.

Workshop paper (3 pages) presented at the ICRA Workshop on C4 Surgical Robots, Singapore, May 2017 (misc)

Abstract
Teleoperated surgical robots such as the Intuitive da Vinci Surgical System facilitate minimally invasive surgeries, which decrease risk to patients. However, these systems can be difficult to learn, and existing training curricula on surgical simulators do not offer students the realistic experience of a full operation. This paper presents an augmented-reality video training platform for the da Vinci that will allow trainees to rehearse any surgery recorded by an expert. While the trainee operates a da Vinci in free space, they see their own instruments overlaid on the expert video. Tools are identified in the source videos via color segmentation and kernelized correlation filter tracking, and their depth is calculated from the da Vinci’s stereoscopic video feed. The user tries to follow the expert’s movements, and if any of their tools venture too far away, the system provides instantaneous visual feedback and pauses to allow the user to correct their motion. The trainee can also rewind the expert video by bringing either da Vinci tool very close to the camera. This combined and augmented video provides the user with an immersive and interactive training experience.

[BibTex]

Estimating B0 inhomogeneities with projection FID navigator readouts

Loktyushin, A., Ehses, P., Schölkopf, B., Scheffler, K.

25th Annual Meeting and Exhibition of the International Society for Magnetic Resonance in Medicine (ISMRM), April 2017 (poster)

link (url) [BibTex]

Image Quality Improvement by Applying Retrospective Motion Correction on Quantitative Susceptibility Mapping and R2*

Feng, X., Loktyushin, A., Deistung, A., Reichenbach, J.

25th Annual Meeting and Exhibition of the International Society for Magnetic Resonance in Medicine (ISMRM), April 2017 (poster)

link (url) [BibTex]

Hand-Clapping Games with a Baxter Robot

Fitter, N. T., Kuchenbecker, K. J.

Hands-on demonstration presented at ACM/IEEE International Conference on Human-Robot Interaction (HRI), Vienna, Austria, March 2017 (misc)

Abstract
Robots that work alongside humans might be more effective if they could forge a strong social bond with their human partners. Hand-clapping games and other forms of rhythmic social-physical interaction may foster human-robot teamwork, but the design of such interactions has scarcely been explored. At the HRI 2017 conference, we will showcase several such interactions taken from our recent work with the Rethink Robotics Baxter Research Robot, including tempo-matching, Simon says, and Pat-a-cake-like games. We believe conference attendees will be both entertained and intrigued by this novel demonstration of social-physical HRI.

Project Page [BibTex]

Automatic OSATS Rating of Trainee Skill at a Pediatric Laparoscopic Suturing Task

Oquendo, Y. A., Riddle, E. W., Hiller, D., Blinman, T. A., Kuchenbecker, K. J.

Surgical Endoscopy, 31(Supplement 1):S28, Extended abstract presented as a podium presentation at the Annual Meeting of the Society of American Gastrointestinal and Endoscopic Surgeons (SAGES), Springer, Houston, USA, March 2017 (misc)

Abstract
Introduction: Minimally invasive surgery has revolutionized surgical practice, but challenges remain. Trainees must acquire complex technical skills while minimizing patient risk, and surgeons must maintain their skills for rare procedures. These challenges are magnified in pediatric surgery due to the smaller spaces, finer tissue, and relative dearth of both inanimate and virtual simulators. To build technical expertise, trainees need opportunities for deliberate practice with specific performance feedback, which is typically provided via tedious human grading. This study aimed to validate a novel motion-tracking system and machine learning algorithm for automatically evaluating trainee performance on a pediatric laparoscopic suturing task using a 1–5 OSATS Overall Skill rating. Methods: Subjects (n=14) ranging from medical students to fellows performed one or two trials of an intracorporeal suturing task in a custom pediatric laparoscopy training box (Fig. 1) after watching a video of ideal performance by an expert. The position and orientation of the tools and endoscope were recorded over time using Ascension trakSTAR magnetic motion-tracking sensors, and both instrument grasp angles were recorded over time using flex sensors on the handles. The 27 trials were video-recorded and scored on the OSATS scale by a senior fellow; ratings ranged from 1 to 4. The raw motion data from each trial was processed to calculate over 200 preliminary motion parameters. Regularized least-squares regression (LASSO) was used to identify the most predictive parameters for inclusion in a regression tree. Model performance was evaluated by leave-one-subject-out cross validation, wherein the automatic scores given to each subject’s trials (by a model trained on all other data) are compared to the corresponding human rater scores.
Results: The best-performing LASSO algorithm identified 14 predictive parameters for inclusion in the regression tree, including completion time, linear path length, angular path length, angular acceleration, grasp velocity, and grasp acceleration. The final model’s raw output showed a strong positive correlation of 0.87 with the reviewer-generated scores, and rounding the output to the nearest integer yielded a leave-one-subject-out cross-validation accuracy of 77.8%. Results are summarized in the confusion matrix (Table 1). Conclusions: Our novel motion-tracking system and regression model automatically gave previously unseen trials overall skill scores that closely match scores from an expert human rater. With additional data and further development, this system may enable creation of a motion-based training platform for pediatric laparoscopic surgery and could yield insights into the fundamental components of surgical skill.
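The leave-one-subject-out scheme described above, train on every other subject's trials and score the held-out subject, can be sketched with numpy. A ridge-regularized linear model stands in for the paper's LASSO-selected regression tree, and the subjects, trials, and motion parameters are all synthetic stand-ins:

```python
import numpy as np

def leave_one_subject_out(X, y, subjects):
    """Predict each subject's trials with a model trained on all other
    subjects. A ridge-regularized linear model stands in for the paper's
    LASSO + regression-tree pipeline."""
    preds = np.empty_like(y, dtype=float)
    lam = 1e-2  # small ridge penalty for numerical stability
    for s in np.unique(subjects):
        test = subjects == s
        train = ~test
        Xtr, ytr = X[train], y[train]
        # closed-form ridge solution: w = (X^T X + lam I)^-1 X^T y
        w = np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(X.shape[1]),
                            Xtr.T @ ytr)
        preds[test] = X[test] @ w
    return preds

# Toy data: 14 subjects, 2 trials each, 5 "motion parameters" per trial.
rng = np.random.default_rng(0)
subjects = np.repeat(np.arange(14), 2)
X = rng.normal(size=(28, 5))
true_w = np.array([1.0, -0.5, 0.0, 2.0, 0.0])
y = X @ true_w + 0.01 * rng.normal(size=28)

preds = leave_one_subject_out(X, y, subjects)
r = np.corrcoef(preds, y)[0, 1]  # analogue of the paper's 0.87 correlation
```

Because every prediction comes from a model that never saw that subject, `r` estimates generalization to new trainees rather than fit to the training set.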

[BibTex]

How Much Haptic Surface Data is Enough?

Burka, A., Kuchenbecker, K. J.

Workshop paper (5 pages) presented at the AAAI Spring Symposium on Interactive Multi-Sensory Object Perception for Embodied Agents, Stanford, USA, March 2017 (misc)

Abstract
The Proton Pack is a portable visuo-haptic surface interaction recording device that will be used to collect a vast multimodal dataset, intended for robots to use as part of an approach to understanding the world around them. In order to collect a useful dataset, we want to pick a suitable interaction duration for each surface, noting the tradeoff between data collection resources and completeness of data. One interesting approach frames the data collection process as an online learning problem, building an incremental surface model and using that model to decide when there is enough data. Here we examine how to do such online surface modeling and when to stop collecting data, using kinetic friction as a first domain in which to apply online modeling.

link (url) Project Page [BibTex]

Enhancing Human-Computer Interaction via Electrovibration

Emgin, S. E., Sadia, B., Vardar, Y., Basdogan, C.

Demo in IEEE World Haptics, 2017 (misc)

Abstract
We present a compact tablet that displays electrostatic haptic feedback to the user. We track the user's finger position via an infrared frame and then display haptic feedback through a capacitive touch screen based on that position. To demonstrate the practical utility of the proposed system, the following applications have been developed: (1) An Online Shopping application allows users to feel the cord density of two different fabrics. (2) An Education application asks the user to add two numbers by dragging one number onto another to match the sum; after the first number is selected, haptic feedback helps the user select the right pair. (3) A Gaming/Entertainment application presents users with a bike-riding experience on three different road textures: smooth, bumpy, and sandy. (4) A User Interface application asks users to drag two visually identical folders; while dragging, users can differentiate the amount of data in each folder based on haptic resistance.

[BibTex]

Reproduction of textures based on electrovibration

Fiedler, T., Vardar, Y., Strese, M., Steinbach, E., Basdogan, C.

Demo in IEEE World Haptics, 2017 (misc)

Abstract
This demonstration presents an approach to representing textures based on electrovibration. We collect acceleration data while sliding a tool tip over a real texture surface. The prerecorded data were collected by an ADXL335 accelerometer mounted on a FALCON device moving along the x-axis at a regulated velocity. In replicating the same acceleration with electrovibration, we encountered two problems. First, the frequency of a sine wave shifts to double its frequency; this effect originates from the electrostatic force between the finger pad and the tactile display, as proposed by Kaczmarek et al. [1]. Taking the square root of the input signal corrects the effect, as also proposed earlier [1, 2, 3]. Second, if not one but multiple sine waves are displayed, interference occurs, and acceleration signals from real textures may not feel perceptually realistic. We propose to display only the dominant frequencies of a real texture signal. Peak frequencies are determined with respect to the 11-percent JND reported in earlier literature, and a new sine-wave signal containing the dominant frequencies is created. In the demo, we will let attendees feel the differences between prerecorded and artificially created textures.
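The two corrections in the abstract, square-rooting the drive signal to undo the frequency doubling of the electrostatic force and keeping only JND-separated dominant frequencies, can be sketched with numpy on a toy two-component signal. The sample rate, component frequencies, and two-peak limit are all illustrative assumptions:

```python
import numpy as np

fs = 10_000                      # Hz, assumed sample rate
t = np.arange(0, 0.5, 1 / fs)
# Toy "recorded acceleration": two dominant sinusoidal components.
accel = np.sin(2 * np.pi * 60 * t) + 0.4 * np.sin(2 * np.pi * 180 * t)

# 1) Electrostatic force scales with the square of the excitation voltage,
#    so a sinusoidal voltage is felt at double its frequency. Driving with
#    the square root of the desired (non-negative) signal compensates.
target = accel - accel.min()     # shift to a non-negative waveform
drive = np.sqrt(target)          # voltage envelope actually displayed
felt = drive ** 2                # what the squaring nonlinearity produces
assert np.allclose(felt, target)

# 2) Keep only dominant spectral peaks spaced by more than the ~11% JND.
spec = np.abs(np.fft.rfft(accel))
freqs = np.fft.rfftfreq(accel.size, 1 / fs)
order = np.argsort(spec)[::-1]   # bins sorted by descending amplitude
peaks = []
for i in order:
    f = freqs[i]
    if f > 0 and all(abs(f - p) / p > 0.11 for p in peaks):
        peaks.append(f)
    if len(peaks) == 2:
        break
```

On this toy signal the peak picker recovers the two component frequencies, largest amplitude first; a new sine-wave signal would then be synthesized from `peaks` alone.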

[BibTex]

Design of a visualization scheme for functional connectivity data of Human Brain

Bramlage, L.

Hochschule Osnabrück - University of Applied Sciences, 2017 (thesis)

Bramlage_BSc_2017.pdf [BibTex]

Generalized phase locking analysis of electrophysiology data

Safavi, S., Panagiotaropoulos, T., Kapoor, V., Logothetis, N. K., Besserve, M.

ESI Systems Neuroscience Conference (ESI-SyNC 2017): Principles of Structural and Functional Connectivity, 2017 (poster)

[BibTex]

2011

Spatiotemporal mapping of rhythmic activity in the inferior convexity of the macaque prefrontal cortex

Panagiotaropoulos, T., Besserve, M., Crocker, B., Kapoor, V., Tolias, A., Panzeri, S., Logothetis, N.

41(239.15), 41st Annual Meeting of the Society for Neuroscience (Neuroscience), November 2011 (poster)

Abstract
The inferior convexity of the macaque prefrontal cortex (icPFC) is known to be involved in higher-order processing of sensory information mediating stimulus selection, attention, and working memory. Until now, the vast majority of electrophysiological investigations of the icPFC employed single-electrode recordings. As a result, relatively little is known about the spatiotemporal structure of neuronal activity in this cortical area. Here we study in detail the spatiotemporal properties of local field potentials (LFPs) in the icPFC using multi-electrode recordings during anesthesia. We computed the LFP-LFP coherence as a function of frequency for thousands of pairs of simultaneously recorded sites anterior to the arcuate and inferior to the principal sulcus. We observed two distinct peaks of coherent oscillatory activity between approximately 4-10 and 15-25 Hz. We then quantified the instantaneous phase of these frequency bands using the Hilbert transform and found robust phase gradients across recording sites. The dependency of the phase on the spatial location reflects the existence of traveling waves of electrical activity in the icPFC. The dominant axis of these traveling waves roughly followed the ventral-dorsal plane. Preliminary results show that repeated visual stimulation with a 10 s movie had no dramatic effect on the spatial structure of the traveling waves. Traveling waves of electrical activity in the icPFC could reflect highly organized cortical processing in this area of prefrontal cortex.
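The phase-gradient analysis, Hilbert transform per channel, then a fit of instantaneous phase against site position, can be sketched with a numpy-only analytic signal on a synthetic traveling wave. The number of sites, the 20 Hz frequency, and the per-site phase lag are made-up stand-ins for the recorded LFPs:

```python
import numpy as np

def instantaneous_phase(x):
    """Instantaneous phase via the analytic signal (FFT-based Hilbert
    transform): zero out negative frequencies, double positive ones."""
    n = x.size
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1
    h[1:(n + 1) // 2] = 2
    if n % 2 == 0:
        h[n // 2] = 1
    return np.angle(np.fft.ifft(X * h))

# Toy traveling wave: 8 recording sites, 20 Hz, fixed phase lag per site.
fs = 1000
t = np.arange(0, 1, 1 / fs)
sites = np.arange(8)
lag = 0.2  # rad per site, assumed
lfp = np.array([np.sin(2 * np.pi * 20 * t - lag * s) for s in sites])

phases = np.array([instantaneous_phase(ch) for ch in lfp])
# Phase gradient across the array at one time point; a nonzero slope is
# the signature of a traveling wave.
grad = np.unwrap(phases[:, 500])
slope = np.polyfit(sites, grad, 1)[0]
```

The fitted slope recovers the injected lag of 0.2 rad per site (with a sign set by the direction of travel), which is the quantity whose spatial dependence the abstract interprets as a traveling wave.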

Web [BibTex]

Evaluation and Optimization of MR-Based Attenuation Correction Methods in Combined Brain PET/MR

Mantlik, F., Hofmann, M., Bezrukov, I., Schmidt, H., Kolb, A., Beyer, T., Reimold, M., Schölkopf, B., Pichler, B.

2011(MIC18.M-96), 2011 IEEE Nuclear Science Symposium, Medical Imaging Conference (NSS-MIC), October 2011 (poster)

Abstract
Combined PET/MR provides simultaneous molecular and functional information in an anatomical context with unique soft tissue contrast. However, PET/MR does not support direct derivation of attenuation maps of objects and tissues within the measured PET field-of-view. Valid attenuation maps are required for quantitative PET imaging, specifically for scientific brain studies. Therefore, several methods have been proposed for MR-based attenuation correction (MR-AC). Last year, we performed an evaluation of different MR-AC methods, including simple MR thresholding, atlas- and machine learning-based MR-AC. CT-based AC served as gold standard reference. RoIs from 2 anatomic brain atlases with different levels of detail were used for evaluation of correction accuracy. We now extend our evaluation of different MR-AC methods by using an enlarged dataset of 23 patients from the integrated BrainPET/MR (Siemens Healthcare). Further, we analyze options for improving the MR-AC performance in terms of speed and accuracy. Finally, we assess the impact of ignoring BrainPET positioning aids during the course of MR-AC. This extended study confirms the overall prediction accuracy evaluation results of the first evaluation in a larger patient population. Removing datasets affected by metal artifacts from the Atlas-Patch database helped to improve prediction accuracy, although the size of the database was reduced by one half. Significant improvement in prediction speed can be gained at a cost of only slightly reduced accuracy, while further optimizations are still possible.

Web [BibTex]

Atlas- and Pattern Recognition Based Attenuation Correction on Simultaneous Whole-Body PET/MR

Bezrukov, I., Schmidt, H., Mantlik, F., Schwenzer, N., Hofmann, M., Schölkopf, B., Pichler, B.

2011(MIC18.M-116), 2011 IEEE Nuclear Science Symposium, Medical Imaging Conference (NSS-MIC), October 2011 (poster)

Abstract
With the recent availability of clinical whole-body PET/MRI it is possible to evaluate and further develop MR-based attenuation correction methods using simultaneously acquired PET/MR data. We present first results for MRAC on patient data acquired on a fully integrated whole-body PET/MRI (Biograph mMR, Siemens) using our method that applies atlas registration and pattern recognition (ATPR) and compare them to the segmentation-based (SEG) method provided by the manufacturer. The ATPR method makes use of a database of previously aligned pairs of MR-CT volumes to predict attenuation values on a continuous scale. The robustness of the method in the presence of MR artifacts was improved by location- and size-based detection. Lesion-to-liver and lesion-to-blood ratios (LLR and LBR) were compared for both methods on 29 iso-contour ROIs in 4 patients. ATPR showed >20% higher LBR and LLR for ROIs in, and >7% near, osseous tissue. For ROIs in soft tissue, both methods yielded similar ratios with maximum differences <6%. For ROIs located within metal artifacts in the MR image, ATPR showed >190% higher LLR and LBR than SEG, where ratios <0.1 occurred. For lesions in the neighborhood of artifacts, both ratios were >15% higher for ATPR. If artifacts in MR volumes caused by metal implants are not accounted for in the computation of attenuation maps, they can lead to a strong decrease of lesion-to-background ratios, even to disappearance of hot spots. Metal implants are likely to occur in the patient collective receiving combined PET/MR scans: of our first 10 patients, 3 had metal implants. Our method is currently able to account for artifacts in the pelvis caused by prostheses. The ability of the ATPR method to account for bone leads to a significant increase of LLR and LBR in osseous tissue, which supports our previous evaluations with combined PET/CT and PET/MR data. For lesions within soft tissue, lesion-to-background ratios of ATPR and SEG were comparable.

Web [BibTex]

Retrospective blind motion correction of MR images

Loktyushin, A., Nickisch, H., Pohmann, R.

Magnetic Resonance Materials in Physics, Biology and Medicine, 24(Supplement 1):498, 28th Annual Scientific Meeting ESMRMB, October 2011 (poster)

Abstract
We present a retrospective method which significantly reduces ghosting and blurring artifacts due to subject motion. No modifications to the sequence (as in [2, 3]) or the use of additional equipment (as in [1]) are required. Our method iteratively searches for the transformation that, when applied to the lines in k-space, yields the sparsest Laplacian filter output in the spatial domain.

PDF Web DOI [BibTex]

Model based reconstruction for GRE EPI

Blecher, W., Pohmann, R., Schölkopf, B., Seeger, M.

Magnetic Resonance Materials in Physics, Biology and Medicine, 24(Supplement 1):493-494, 28th Annual Scientific Meeting ESMRMB, October 2011 (poster)

Abstract
Model based nonlinear image reconstruction methods for MRI [3] are at the heart of modern reconstruction techniques (e.g. compressed sensing [6]). In general, models are expressed as a matrix equation y = Xu + e, where y and u are column vectors of k-space and image data, X is the model matrix, and e is independent noise. However, directly solving the corresponding linear system is not tractable. Therefore, fast nonlinear algorithms that minimize a function with respect to the unknown image are the method of choice. In this work, a model for gradient echo EPI is proposed that incorporates N/2 ghost correction and correction for field inhomogeneities. In addition to reconstruction from full data, the model allows for sparse reconstruction, joint estimation of the image, field map, and relaxation map (like [5,8] for spiral imaging), and improved N/2 ghost correction.
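Iterative solvers of this kind avoid ever forming an explicit inverse of X. A minimal linear stand-in is conjugate gradient on the normal equations, shown here on a small dense toy matrix with noiseless data; in the real model, X would encode Fourier sampling plus the ghost and field-inhomogeneity terms, and the objective would be minimized nonlinearly:

```python
import numpy as np

def cg_normal_equations(X, y, iters=50):
    """Conjugate gradient on X^T X u = X^T y: minimizes ||y - Xu||^2
    using only matrix-vector products, never an explicit inverse."""
    u = np.zeros(X.shape[1])
    r = X.T @ y - X.T @ (X @ u)   # residual of the normal equations
    p = r.copy()
    rs = r @ r
    for _ in range(iters):
        Ap = X.T @ (X @ p)
        alpha = rs / (p @ Ap)
        u += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if rs_new < 1e-12:        # converged
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return u

rng = np.random.default_rng(2)
X = rng.normal(size=(40, 20))     # stand-in "model matrix"
u_true = rng.normal(size=20)      # stand-in "image"
y = X @ u_true                    # noiseless toy k-space data
u_est = cg_normal_equations(X, y)
```

Each iteration costs only two matrix-vector products, which is what makes this family of solvers practical when X is a large implicit operator rather than a stored matrix.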

PDF Web DOI [BibTex]

Simultaneous multimodal imaging of patients with bronchial carcinoma in a whole body MR/PET system

Brendle, C., Sauter, A., Schmidt, H., Schraml, C., Bezrukov, I., Martirosian, P., Hetzel, J., Müller, M., Claussen, C., Schwenzer, N., Pfannenberg, C.

Magnetic Resonance Materials in Physics, Biology and Medicine, 24(Supplement 1):141, 28th Annual Scientific Meeting of the European Society for Magnetic Resonance in Medicine and Biology (ESMRMB), October 2011 (poster)

Abstract
Purpose/Introduction: Lung cancer is among the most frequent cancers (1). Exact determination of tumour extent and viability is crucial for adequate therapy guidance. [18F]-FDG-PET allows accurate staging and the evaluation of therapy response based on glucose metabolism. Diffusion weighted MRI (DWI) is another promising tool for the evaluation of tumour viability (2,3). The aim of the study was the simultaneous PET-MR acquisition in lung cancer patients and correlation of PET and MR data. Subjects and Methods: Seven patients (age 38-73 years, mean 61 years) with highly suspected or known bronchial carcinoma were examined. First, a [18F]-FDG-PET/CT was performed (injected dose: 332-380 MBq). Subsequently, patients were examined at the whole-body MR/PET (Siemens Biograph mMR). The MRI is a modified 3T Verio whole body system with a magnet bore of 60 cm (max. amplitude gradients 45 mT/m, max. slew rate 200 T/m/s). Concerning the PET, the whole-body MR/PET system comprises 56 detector cassettes with a 59.4 cm transaxial and 25.8 cm axial FoV. The following parameters for PET acquisition were applied: 2 bed positions, 6 min/bed with an average uptake time of 124 min after injection (range: 110-143 min). The attenuation correction of PET data was conducted with a segmentation-based method provided by the manufacturer. Acquired PET data were reconstructed with an iterative 3D OSEM algorithm using 3 iterations and 21 subsets, Gaussian filter of 3 mm. DWI MR images were recorded simultaneously for each bed using two b-values (0/800 s/mm2). SUVmax and ADCmin were assessed in a ROI analysis. The following ratios were calculated: SUVmax(tumor)/SUVmean(liver) and ADCmin(tumor)/ADCmean(muscle). Correlation between SUV and ADC was analyzed (Pearson’s correlation). Results: Diagnostic scans could be obtained in all patients with good tumour delineation. The spatial matching of PET and DWI data was very exact. 
Most tumours showed a pronounced FDG-uptake in combination with decreased ADC values. Significant correlation was found between SUV and ADC ratios (r = -0.87, p = 0.0118). Discussion/Conclusion: Simultaneous MR/PET imaging of lung cancer is feasible. The whole-body MR/PET system can provide complementary information regarding tumour viability and cellularity, which could facilitate a more profound tumour characterization. Further studies are needed to evaluate the importance of these parameters for therapy decisions and monitoring.

Web DOI [BibTex]

Please do not touch the robot

Romano, J. M., Kuchenbecker, K. J.

Hands-on demonstration presented at IEEE/RSJ Conference on Intelligent Robots and Systems (IROS), San Francisco, California, September 2011 (misc)

[BibTex]

Body-Grounded Tactile Actuators for Playback of Human Physical Contact

Stanley, A. A., Kuchenbecker, K. J.

Hands-on demonstration presented at IEEE World Haptics Conference, Istanbul, Turkey, June 2011 (misc)

[BibTex]

Support Vector Machines for finding deletions and short insertions using paired-end short reads

Grimm, D., Hagmann, J., König, D., Weigel, D., Borgwardt, K. M.

International Conference on Intelligent Systems for Molecular Biology (ISMB), 2011 (poster)

Web [BibTex]

Statistical estimation for optimization problems on graphs

Langovoy, M., Sra, S.

Empirical Inference Symposium, 2011 (poster)

[BibTex]

Transfer Learning with Copulas

Lopez-Paz, D., Hernandez-Lobato, J.

Neural Information Processing Systems (NIPS), 2011 (poster)

PDF [BibTex]

2006

Some observations on the pedestal effect or dipper function

Henning, B., Wichmann, F.

Journal of Vision, 6(13):50, 2006 Fall Vision Meeting of the Optical Society of America, December 2006 (poster)

Abstract
The pedestal effect is the large improvement in the detectability of a sinusoidal “signal” grating observed when the signal is added to a masking or “pedestal” grating of the same spatial frequency, orientation, and phase. We measured the pedestal effect in both broadband and notched noise - noise from which a 1.5-octave band centred on the signal frequency had been removed. Although the pedestal effect persists in broadband noise, it almost disappears in the notched noise. Furthermore, the pedestal effect is substantial when either high- or low-pass masking noise is used. We conclude that the pedestal effect in the absence of notched noise results principally from the use of information derived from channels with peak sensitivities at spatial frequencies different from that of the signal and pedestal. The spatial-frequency components of the notched noise above and below the spatial frequency of the signal and pedestal prevent the use of information about changes in contrast carried in channels tuned to spatial frequencies that are very much different from that of the signal and pedestal. Thus the pedestal or dipper effect measured without notched noise is not a characteristic of individual spatial-frequency tuned channels.

Web DOI [BibTex]

Optimizing Spatial Filters for BCI: Margin- and Evidence-Maximization Approaches

Farquhar, J., Hill, N., Schölkopf, B.

Challenging Brain-Computer Interfaces: MAIA Workshop 2006, pages: 1, November 2006 (poster)

Abstract
We present easy-to-use alternatives to the often-used two-stage Common Spatial Pattern + classifier approach for spatial filtering and classification of Event-Related Desynchronization signals in BCI. We report two algorithms that aim to optimize the spatial filters according to a criterion more directly related to the ability of the algorithms to generalize to unseen data. Both are based upon the idea of treating the spatial filter coefficients as hyperparameters of a kernel or covariance function. We then optimize these hyperparameters directly alongside the normal classifier parameters with respect to our chosen learning objective function. The two objectives considered are margin maximization as used in Support Vector Machines and the evidence maximization framework used in Gaussian Processes. Our experiments assessed generalization error as a function of the number of training points used, on 9 BCI competition data sets and 5 offline motor imagery data sets measured in Tübingen. Both our approaches show consistent improvements relative to the commonly used CSP+linear classifier combination. Strikingly, the improvement is most significant in the higher-noise cases, when either few trials are used for training or with the most poorly performing subjects. This is a reversal of the usual "rich get richer" effect in the development of CSP extensions, which tend to perform best when the signal is strong enough to accurately find their additional parameters. This makes our approach particularly suitable for clinical application, where high levels of noise are to be expected.

PDF PDF [BibTex]

Learning Eye Movements

Kienzle, W., Wichmann, F., Schölkopf, B., Franz, M.

Sensory Coding And The Natural Environment, 2006, pages: 1, September 2006 (poster)

Abstract
The human visual system samples images through saccadic eye movements which rapidly change the point of fixation. Although the selection of eye movement targets depends on numerous top-down mechanisms, a number of recent studies have shown that low-level image features such as local contrast or edges play an important role. These studies typically used predefined image features which were afterwards experimentally verified. Here, we follow a complementary approach: instead of testing a set of candidate image features, we infer these hypotheses from the data, using methods from statistical learning. To this end, we train a non-linear classifier on fixated vs. randomly selected image patches without making any physiological assumptions. The resulting classifier can be essentially characterized by a nonlinear combination of two center-surround receptive fields. We find that the prediction performance of this simple model on our eye movement data is indistinguishable from the physiologically motivated model of Itti & Koch (2000) which is far more complex. In particular, we obtain a comparable performance without using any multi-scale representations, long-range interactions or oriented image features.

ei

Web [BibTex]



no image
Classification of natural scenes: Critical features revisited

Drewes, J., Wichmann, F., Gegenfurtner, K.

Journal of Vision, 6(6):561, 6th Annual Meeting of the Vision Sciences Society (VSS), June 2006 (poster)

Abstract
Human observers are capable of detecting animals within novel natural scenes with remarkable speed and accuracy. Despite the seeming complexity of such decisions it has been hypothesized that a simple global image feature, the relative abundance of high spatial frequencies at certain orientations, could underlie such fast image classification (A. Torralba & A. Oliva, Network: Comput. Neural Syst., 2003). We successfully used linear discriminant analysis to classify a set of 11,000 images into “animal” and “non-animal” images based on their individual amplitude spectra only (Drewes, Wichmann, Gegenfurtner VSS 2005). We proceeded to sort the images based on the performance of our classifier, retaining only the best and worst classified 400 images (“best animals”, “best distractors” and “worst animals”, “worst distractors”). We used a Go/No-go paradigm to evaluate human performance on this subset of our images. Both reaction time and proportion of correctly classified images showed a significant effect of classification difficulty. Images more easily classified by our algorithm were also classified faster and better by humans, as predicted by the Torralba & Oliva hypothesis. We then equated the amplitude spectra of the 400 images, which, by design, reduced algorithmic performance to chance whereas human performance was only slightly reduced (cf. Wichmann, Rosas, Gegenfurtner, VSS 2005). Most importantly, the same images as before were still classified better and faster, suggesting that even in the original condition features other than specifics of the amplitude spectrum made particular images easy to classify, clearly at odds with the Torralba & Oliva hypothesis.
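The amplitude-spectrum classification step can be sketched with a regularized Fisher linear discriminant on toy images. The synthetic gratings, feature choice (log amplitude spectrum), and regularization constant are all illustrative assumptions, not the actual 11,000-image pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)
n, size = 40, 16
xx = np.arange(size)

def grating(freq):
    """Toy 'image': a horizontal sinusoidal grating plus pixel noise."""
    img = np.sin(2 * np.pi * freq * xx / size)[None, :] * np.ones((size, 1))
    return img + 0.3 * rng.standard_normal((size, size))

def amp_spectrum(img):
    # Log amplitude spectrum of the image as the feature vector.
    return np.log1p(np.abs(np.fft.fft2(img))).ravel()

X0 = np.array([amp_spectrum(grating(2)) for _ in range(n)])  # "class A"
X1 = np.array([amp_spectrum(grating(6)) for _ in range(n)])  # "class B"
X = np.vstack([X0, X1])
y = np.r_[np.zeros(n), np.ones(n)]

# Regularized Fisher linear discriminant on the spectral features.
m0, m1 = X0.mean(0), X1.mean(0)
Sw = np.cov(X0.T) * (n - 1) + np.cov(X1.T) * (n - 1)  # within-class scatter
w = np.linalg.solve(Sw + 1e-3 * np.eye(Sw.shape[0]), m1 - m0)
scores = X @ w
thresh = ((X0 @ w).mean() + (X1 @ w).mean()) / 2
acc = np.mean((scores > thresh) == (y == 1))
```

Since the two classes differ systematically in their spectra, the discriminant separates them almost perfectly; equating the spectra (as in the abstract) would, by construction, collapse exactly this source of information.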

ei

Web DOI [BibTex]



no image
The pedestal effect is caused by off-frequency looking, not nonlinear transduction or contrast gain-control

Wichmann, F., Henning, B.

Journal of Vision, 6(6):194, 6th Annual Meeting of the Vision Sciences Society (VSS), June 2006 (poster)

Abstract
The pedestal or dipper effect is the large improvement in the detectability of a sinusoidal grating observed when the signal is added to a pedestal or masking grating having the signal’s spatial frequency, orientation, and phase. The effect is largest with pedestal contrasts just above the ‘threshold’ in the absence of a pedestal. We measured the pedestal effect in both broadband and notched masking noise---noise from which a 1.5-octave band centered on the signal and pedestal frequency had been removed. The pedestal effect persists in broadband noise, but almost disappears with notched noise. The spatial-frequency components of the notched noise that lie above and below the spatial frequency of the signal and pedestal prevent the use of information about changes in contrast carried in channels tuned to spatial frequencies that are very much different from that of the signal and pedestal. We conclude that the pedestal effect in the absence of notched noise results principally from the use of information derived from channels with peak sensitivities at spatial frequencies that are different from that of the signal and pedestal. Thus the pedestal or dipper effect is not a characteristic of individual spatial-frequency tuned channels.

ei

Web DOI [BibTex]



no image
The Pedestal Effect is Caused by Off-Frequency Looking, not Nonlinear Transduction or Contrast Gain-Control

Wichmann, F., Henning, G.

9, pages: 174, 9th Tübingen Perception Conference (TWK), March 2006 (poster)

Abstract
The pedestal or dipper effect is the large improvement in the detectability of a sinusoidal grating observed when the signal is added to a pedestal or masking grating having the signal’s spatial frequency, orientation, and phase. The effect is largest with pedestal contrasts just above the ‘threshold’ in the absence of a pedestal. We measured the pedestal effect in both broadband and notched masking noise---noise from which a 1.5-octave band centered on the signal and pedestal frequency had been removed. The pedestal effect persists in broadband noise, but almost disappears with notched noise. The spatial-frequency components of the notched noise that lie above and below the spatial frequency of the signal and pedestal prevent the use of information about changes in contrast carried in channels tuned to spatial frequencies that are very much different from that of the signal and pedestal. We conclude that the pedestal effect in the absence of notched noise results principally from the use of information derived from channels with peak sensitivities at spatial frequencies that are different from that of the signal and pedestal. Thus the pedestal or dipper effect is not a characteristic of individual spatial-frequency tuned channels.

ei

Web [BibTex]



no image
Efficient tests for the deconvolution hypothesis

Langovoy, M.

Workshop on Statistical Inverse Problems, March 2006 (poster)

ei

Web [BibTex]



no image
Classification of Natural Scenes: Critical Features Revisited

Drewes, J., Wichmann, F., Gegenfurtner, K.

9, pages: 92, 9th Tübingen Perception Conference (TWK), March 2006 (poster)

Abstract
Human observers are capable of detecting animals within novel natural scenes with remarkable speed and accuracy. Despite the seeming complexity of such decisions it has been hypothesized that a simple global image feature, the relative abundance of high spatial frequencies at certain orientations, could underlie such fast image classification [1]. We successfully used linear discriminant analysis to classify a set of 11,000 images into “animal” and “non-animal” images based on their individual amplitude spectra only [2]. We proceeded to sort the images based on the performance of our classifier, retaining only the best and worst classified 400 images ("best animals", "best distractors" and "worst animals", "worst distractors"). We used a Go/No-go paradigm to evaluate human performance on this subset of our images. Both reaction time and proportion of correctly classified images showed a significant effect of classification difficulty. Images more easily classified by our algorithm were also classified faster and better by humans, as predicted by the Torralba & Oliva hypothesis. We then equated the amplitude spectra of the 400 images, which, by design, reduced algorithmic performance to chance whereas human performance was only slightly reduced [3]. Most importantly, the same images as before were still classified better and faster, suggesting that even in the original condition features other than specifics of the amplitude spectrum made particular images easy to classify, clearly at odds with the Torralba & Oliva hypothesis.

ei

Web [BibTex]



no image
Factorial Coding of Natural Images: How Effective are Linear Models in Removing Higher-Order Dependencies?

Bethge, M.

9, pages: 90, 9th Tübingen Perception Conference (TWK), March 2006 (poster)

Abstract
The performance of unsupervised learning models for natural images is evaluated quantitatively by means of information theory. We estimate the gain in statistical independence (the multi-information reduction) achieved with independent component analysis (ICA), principal component analysis (PCA), zero-phase whitening, and predictive coding. Predictive coding is translated into the transform coding framework, where it can be characterized by the constraint of a triangular filter matrix. A randomly sampled whitening basis and the Haar wavelet are included into the comparison as well. The comparison of all these methods is carried out for different patch sizes, ranging from 2x2 to 16x16 pixels. In spite of large differences in the shape of the basis functions, we find only small differences in the multi-information between all decorrelation transforms (5% or less) for all patch sizes. Among the second-order methods, PCA is optimal for small patch sizes and predictive coding performs best for large patch sizes. The extra gain achieved with ICA is always less than 2%. In conclusion, the ‘edge filters’ found with ICA lead only to a surprisingly small improvement in terms of its actual objective.
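The multi-information reduction achieved by a decorrelation transform can be sketched under a Gaussian assumption, where it is the sum of marginal log-variances minus the log-determinant of the covariance. The linearly mixed toy data and the Gaussian entropy estimate are illustrative assumptions; the paper's estimates on natural image patches are more involved:

```python
import numpy as np

rng = np.random.default_rng(0)

def multi_information(X):
    """Gaussian estimate of the multi-information (in nats):
    sum of marginal entropies minus the joint entropy, which for a
    Gaussian reduces to 0.5 * (sum log var_i - log det Sigma)."""
    S = np.cov(X.T)
    return 0.5 * (np.sum(np.log(np.diag(S))) - np.linalg.slogdet(S)[1])

# Correlated toy 'pixels': independent latent sources mixed linearly.
d, n = 8, 5000
A = rng.standard_normal((d, d))
X = rng.standard_normal((n, d)) @ A.T

mi_raw = multi_information(X)
# PCA decorrelation: rotate into the eigenbasis of the covariance.
_, V = np.linalg.eigh(np.cov(X.T))
mi_pca = multi_information(X @ V)
```

For Gaussian data any decorrelating rotation removes all multi-information, which is why, as the abstract reports, the extra gain of ICA over second-order methods on natural images is a measure of the higher-order (non-Gaussian) dependencies alone.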

ei

Web [BibTex]



no image
Classification of natural scenes: critical features revisited

Drewes, J., Wichmann, F., Gegenfurtner, K.

Experimentelle Psychologie: Beiträge zur 48. Tagung experimentell arbeitender Psychologen, 48, pages: 251, 2006 (poster)

ei

[BibTex]



no image
Texture and haptic cues in slant discrimination: combination is sensitive to reliability but not statistically optimal

Rosas, P., Wagemans, J., Ernst, M., Wichmann, F.

Beiträge zur 48. Tagung experimentell arbeitender Psychologen (TeaP 2006), 48, pages: 80, 2006 (poster)

ei

[BibTex]



no image
Ähnlichkeitsmasse in Modellen zur Kategorienbildung

Jäkel, F., Wichmann, F.

Experimentelle Psychologie: Beiträge zur 48. Tagung experimentell arbeitender Psychologen, 48, pages: 223, 2006 (poster)

ei

[BibTex]



no image
The pedestal effect is caused by off-frequency looking, not nonlinear transduction or contrast gain-control

Wichmann, F., Henning, B.

Experimentelle Psychologie: Beiträge zur 48. Tagung experimentell arbeitender Psychologen, 48, pages: 205, 2006 (poster)

ei

[BibTex]
