

2017


Improving performance of linear field generation with multi-coil setup by optimizing coils position

Aghaeifar, A., Loktyushin, A., Eschelbach, M., Scheffler, K.

Magnetic Resonance Materials in Physics, Biology and Medicine, 30(Supplement 1):S259, 34th Annual Scientific Meeting of the European Society for Magnetic Resonance in Medicine and Biology (ESMRMB), October 2017 (poster)

link (url) DOI [BibTex]


Editorial for the Special Issue on Microdevices and Microsystems for Cell Manipulation

Hu, W., Ohta, A. T.

8, Multidisciplinary Digital Publishing Institute, September 2017 (misc)

DOI [BibTex]


Physical and Behavioral Factors Improve Robot Hug Quality

Block, A. E., Kuchenbecker, K. J.

Workshop Paper (2 pages) presented at the RO-MAN Workshop on Social Interaction and Multimodal Expression for Socially Intelligent Robots, Lisbon, Portugal, August 2017 (misc)

Abstract
A hug is one of the most basic ways humans can express affection. As hugs are so common, a natural progression of robot development is to have robots one day hug humans as seamlessly as these intimate human-human interactions occur. This project’s purpose is to evaluate human responses to different robot physical characteristics and hugging behaviors. Specifically, we aim to test the hypothesis that a warm, soft, touch-sensitive PR2 humanoid robot can provide humans with satisfying hugs by matching both their hugging pressure and their hugging duration. Thirty participants experienced and evaluated twelve hugs with the robot, divided into three randomly ordered trials that focused on physical robot characteristics and nine randomly ordered trials with varied hug pressure and duration. We found that people prefer soft, warm hugs over hard, cold hugs. Furthermore, users prefer hugs that physically squeeze them and release immediately when they are ready for the hug to end.

Project Page [BibTex]


Crowdshaping Realistic 3D Avatars with Words

Streuber, S., Ramirez, M. Q., Black, M., Zuffi, S., O’Toole, A., Hill, M. Q., Hahn, C. A.

August 2017, Application PCT/EP2017/051954 (misc)

Abstract
A method for generating a body shape, comprising the steps: - receiving one or more linguistic descriptors related to the body shape; - retrieving an association between the one or more linguistic descriptors and a body shape; and - generating the body shape, based on the association.

Google Patents [BibTex]


Asymptotic Normality of the Median Heuristic

Garreau, D.

July 2017, preprint (unpublished)

link (url) [BibTex]


Physically Interactive Exercise Games with a Baxter Robot

Fitter, N. T., Kuchenbecker, K. J.

Hands-on demonstration presented at the IEEE World Haptics Conference (WHC), Munich, Germany, June 2017 (misc)

Project Page [BibTex]


Proton Pack: Visuo-Haptic Surface Data Recording

Burka, A., Kuchenbecker, K. J.

Hands-on demonstration presented at the IEEE World Haptics Conference (WHC), Munich, Germany, June 2017 (misc)

Project Page [BibTex]


Teaching a Robot to Collaborate with a Human Via Haptic Teleoperation

Hu, S., Kuchenbecker, K. J.

Work-in-progress paper (2 pages) presented at the IEEE World Haptics Conference (WHC), Munich, Germany, June 2017 (misc)

Project Page [BibTex]


How Should Robots Hug?

Block, A. E., Kuchenbecker, K. J.

Work-in-progress paper (2 pages) presented at the IEEE World Haptics Conference (WHC), Munich, Germany, June 2017 (misc)

Project Page [BibTex]


An Interactive Augmented-Reality Video Training Platform for the da Vinci Surgical System

Carlson, J., Kuchenbecker, K. J.

Workshop paper (3 pages) presented at the ICRA Workshop on C4 Surgical Robots, Singapore, May 2017 (misc)

Abstract
Teleoperated surgical robots such as the Intuitive da Vinci Surgical System facilitate minimally invasive surgeries, which decrease risk to patients. However, these systems can be difficult to learn, and existing training curricula on surgical simulators do not offer students the realistic experience of a full operation. This paper presents an augmented-reality video training platform for the da Vinci that will allow trainees to rehearse any surgery recorded by an expert. While the trainee operates a da Vinci in free space, they see their own instruments overlaid on the expert video. Tools are identified in the source videos via color segmentation and kernelized correlation filter tracking, and their depth is calculated from the da Vinci’s stereoscopic video feed. The user tries to follow the expert’s movements, and if any of their tools venture too far away, the system provides instantaneous visual feedback and pauses to allow the user to correct their motion. The trainee can also rewind the expert video by bringing either da Vinci tool very close to the camera. This combined and augmented video provides the user with an immersive and interactive training experience.
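The tool-identification step described in this abstract (color segmentation of the instruments in the video) can be illustrated with a minimal sketch. This is not the authors' implementation: the HSV thresholds, the toy frame, and the helper name are invented for illustration, and a real system would add kernelized correlation filter tracking and stereo depth estimation on top.

```python
import numpy as np

def hsv_mask(hsv_image, h_range, s_min=0.4, v_min=0.3):
    """Return a boolean mask of pixels whose hue falls in h_range and whose
    saturation and value exceed the given floors (all channels in [0, 1])."""
    h, s, v = hsv_image[..., 0], hsv_image[..., 1], hsv_image[..., 2]
    lo, hi = h_range
    return (h >= lo) & (h <= hi) & (s >= s_min) & (v >= v_min)

# Toy frame: a 4x4 HSV image with a saturated "tool-colored" patch (hue ~0.6).
frame = np.zeros((4, 4, 3))
frame[1:3, 1:3] = [0.6, 0.9, 0.8]
mask = hsv_mask(frame, (0.55, 0.65))
ys, xs = np.nonzero(mask)
centroid = (ys.mean(), xs.mean())   # crude 2D tool-position estimate
```

The centroid of the segmented region gives a per-frame position that a tracker could then smooth over time.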

[BibTex]


Estimating B0 inhomogeneities with projection FID navigator readouts

Loktyushin, A., Ehses, P., Schölkopf, B., Scheffler, K.

25th Annual Meeting and Exhibition of the International Society for Magnetic Resonance in Medicine (ISMRM), April 2017 (poster)

link (url) [BibTex]


Image Quality Improvement by Applying Retrospective Motion Correction on Quantitative Susceptibility Mapping and R2*

Feng, X., Loktyushin, A., Deistung, A., Reichenbach, J.

25th Annual Meeting and Exhibition of the International Society for Magnetic Resonance in Medicine (ISMRM), April 2017 (poster)

link (url) [BibTex]


Hand-Clapping Games with a Baxter Robot

Fitter, N. T., Kuchenbecker, K. J.

Hands-on demonstration presented at the ACM/IEEE International Conference on Human-Robot Interaction (HRI), Vienna, Austria, March 2017 (misc)

Abstract
Robots that work alongside humans might be more effective if they could forge a strong social bond with their human partners. Hand-clapping games and other forms of rhythmic social-physical interaction may foster human-robot teamwork, but the design of such interactions has scarcely been explored. At the HRI 2017 conference, we will showcase several such interactions taken from our recent work with the Rethink Robotics Baxter Research Robot, including tempo-matching, Simon says, and Pat-a-cake-like games. We believe conference attendees will be both entertained and intrigued by this novel demonstration of social-physical HRI.

Project Page [BibTex]


Automatic OSATS Rating of Trainee Skill at a Pediatric Laparoscopic Suturing Task

Oquendo, Y. A., Riddle, E. W., Hiller, D., Blinman, T. A., Kuchenbecker, K. J.

Surgical Endoscopy, 31(Supplement 1):S28, Extended abstract presented as a podium presentation at the Annual Meeting of the Society of American Gastrointestinal and Endoscopic Surgeons (SAGES), Springer, Houston, USA, March 2017 (misc)

Abstract
Introduction: Minimally invasive surgery has revolutionized surgical practice, but challenges remain. Trainees must acquire complex technical skills while minimizing patient risk, and surgeons must maintain their skills for rare procedures. These challenges are magnified in pediatric surgery due to the smaller spaces, finer tissue, and relative dearth of both inanimate and virtual simulators. To build technical expertise, trainees need opportunities for deliberate practice with specific performance feedback, which is typically provided via tedious human grading. This study aimed to validate a novel motion-tracking system and machine learning algorithm for automatically evaluating trainee performance on a pediatric laparoscopic suturing task using a 1–5 OSATS Overall Skill rating. Methods: Subjects (n=14) ranging from medical students to fellows performed one or two trials of an intracorporeal suturing task in a custom pediatric laparoscopy training box (Fig. 1) after watching a video of ideal performance by an expert. The position and orientation of the tools and endoscope were recorded over time using Ascension trakSTAR magnetic motion-tracking sensors, and both instrument grasp angles were recorded over time using flex sensors on the handles. The 27 trials were video-recorded and scored on the OSATS scale by a senior fellow; ratings ranged from 1 to 4. The raw motion data from each trial was processed to calculate over 200 preliminary motion parameters. Regularized least-squares regression (LASSO) was used to identify the most predictive parameters for inclusion in a regression tree. Model performance was evaluated by leave-one-subject-out cross validation, wherein the automatic scores given to each subject’s trials (by a model trained on all other data) are compared to the corresponding human rater scores.
Results: The best-performing LASSO algorithm identified 14 predictive parameters for inclusion in the regression tree, including completion time, linear path length, angular path length, angular acceleration, grasp velocity, and grasp acceleration. The final model’s raw output showed a strong positive correlation of 0.87 with the reviewer-generated scores, and rounding the output to the nearest integer yielded a leave-one-subject-out cross-validation accuracy of 77.8%. Results are summarized in the confusion matrix (Table 1). Conclusions: Our novel motion-tracking system and regression model automatically gave previously unseen trials overall skill scores that closely match scores from an expert human rater. With additional data and further development, this system may enable creation of a motion-based training platform for pediatric laparoscopic surgery and could yield insights into the fundamental components of surgical skill.
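The pipeline this abstract describes, LASSO feature selection followed by leave-one-subject-out evaluation of the resulting model, can be sketched on synthetic data. Everything below is illustrative: the features and scores are simulated stand-ins for the motion parameters, the coordinate-descent LASSO is a tiny generic implementation, and the final predictor is a plain least-squares refit rather than the paper's regression tree.

```python
import numpy as np

rng = np.random.default_rng(0)
n_subj, trials, n_feat = 14, 2, 10
subj = np.repeat(np.arange(n_subj), trials)
y = rng.integers(1, 5, size=n_subj * trials).astype(float)  # OSATS-like 1-4 ratings
X = rng.normal(size=(y.size, n_feat))
X[:, 0] = y + 0.05 * rng.normal(size=y.size)  # one informative "motion parameter"

def lasso(X, y, lam=0.3, n_iter=300):
    """Tiny coordinate-descent LASSO for standardized inputs."""
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        for j in range(X.shape[1]):
            r = y - X @ w + X[:, j] * w[j]          # residual excluding feature j
            rho = X[:, j] @ r / len(y)
            z = X[:, j] @ X[:, j] / len(y)
            w[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / z
    return w

Xs = (X - X.mean(0)) / X.std(0)
sel = np.flatnonzero(np.abs(lasso(Xs, y - y.mean())) > 1e-8)

# Leave-one-subject-out: refit on the selected features, round predicted scores.
correct = 0
for s in range(n_subj):
    tr, te = subj != s, subj == s
    A = np.column_stack([np.ones(tr.sum()), Xs[tr][:, sel]])
    coef, *_ = np.linalg.lstsq(A, y[tr], rcond=None)
    pred = np.column_stack([np.ones(te.sum()), Xs[te][:, sel]]) @ coef
    correct += int(np.sum(np.clip(np.round(pred), 1, 4) == y[te]))
accuracy = correct / y.size
```

With a strongly informative feature, the selection step keeps it and the held-out-subject accuracy is high; real motion data would be far noisier.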

[BibTex]


How Much Haptic Surface Data is Enough?

Burka, A., Kuchenbecker, K. J.

Workshop paper (5 pages) presented at the AAAI Spring Symposium on Interactive Multi-Sensory Object Perception for Embodied Agents, Stanford, USA, March 2017 (misc)

Abstract
The Proton Pack is a portable visuo-haptic surface interaction recording device that will be used to collect a vast multimodal dataset, intended for robots to use as part of an approach to understanding the world around them. In order to collect a useful dataset, we want to pick a suitable interaction duration for each surface, noting the tradeoff between data collection resources and completeness of data. One interesting approach frames the data collection process as an online learning problem, building an incremental surface model and using that model to decide when there is enough data. Here we examine how to do such online surface modeling and when to stop collecting data, using kinetic friction as a first domain in which to apply online modeling.
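The online-modeling idea in this abstract can be sketched for the kinetic-friction domain it mentions: stream (normal force, friction force) samples, maintain a running least-squares estimate of the friction coefficient, and stop collecting once the estimate has stabilized. The data generator, tolerance, and stopping window below are illustrative assumptions, not the Proton Pack's actual criteria.

```python
import numpy as np

def friction_stream(true_mu=0.4, noise=0.02, seed=1):
    """Yield (normal force, friction force) pairs from a Coulomb model F = mu*N."""
    rng = np.random.default_rng(seed)
    while True:
        n = rng.uniform(1.0, 5.0)
        yield n, true_mu * n + noise * rng.normal()

def collect_until_stable(stream, tol=1e-3, window=20, max_samples=5000):
    """Running through-the-origin least squares mu = sum(F*N)/sum(N^2);
    stop when the estimate has drifted less than tol over the last window."""
    sn2 = sfn = 0.0
    history = []
    for t, (n, f) in enumerate(stream, 1):
        sn2 += n * n
        sfn += f * n
        history.append(sfn / sn2)
        if t >= 2 * window and abs(history[-1] - history[-window]) < tol:
            return history[-1], t
        if t >= max_samples:
            return history[-1], t

mu_hat, n_used = collect_until_stable(friction_stream())
```

The stopping rule trades data-collection time against estimate quality, which is exactly the tradeoff the abstract frames as an online learning problem.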

link (url) Project Page [BibTex]


Design of a visualization scheme for functional connectivity data of Human Brain

Bramlage, L.

Hochschule Osnabrück - University of Applied Sciences, 2017 (thesis)


Bramlage_BSc_2017.pdf [BibTex]


Generalized phase locking analysis of electrophysiology data

Safavi, S., Panagiotaropoulos, T., Kapoor, V., Logothetis, N. K., Besserve, M.

ESI Systems Neuroscience Conference (ESI-SyNC 2017): Principles of Structural and Functional Connectivity, 2017 (poster)

[BibTex]

2015


Diversity of sharp wave-ripples in the CA1 of the macaque hippocampus and their brain wide signatures

Ramirez-Villegas, J. F., Logothetis, N. K., Besserve, M.

45th Annual Meeting of the Society for Neuroscience (Neuroscience 2015), October 2015 (poster)

link (url) [BibTex]


Retrospective rigid motion correction of undersampled MRI data

Loktyushin, A., Babayeva, M., Gallichan, D., Krueger, G., Scheffler, K., Kober, T.

23rd Annual Meeting and Exhibition of the International Society for Magnetic Resonance in Medicine, ISMRM, June 2015 (poster)

[BibTex]


Improving Quantitative Susceptibility and R2* Mapping by Applying Retrospective Motion Correction

Feng, X., Loktyushin, A., Deistung, A., Reichenbach, J. R.

23rd Annual Meeting and Exhibition of the International Society for Magnetic Resonance in Medicine, ISMRM, June 2015 (poster)

[BibTex]


Increasing the sensitivity of Kepler to Earth-like exoplanets

Foreman-Mackey, D., Hogg, D., Schölkopf, B., Wang, D.

Workshop: 225th American Astronomical Society Meeting 2015, pages: 105.01D, 2015 (poster)

Web link (url) [BibTex]


Calibrating the pixel-level Kepler imaging data with a causal data-driven model

Wang, D., Foreman-Mackey, D., Hogg, D., Schölkopf, B.

Workshop: 225th American Astronomical Society Meeting 2015, pages: 258.08, 2015 (poster)

Web link (url) [BibTex]


Disparity estimation from a generative light field model

Köhler, R., Schölkopf, B., Hirsch, M.

IEEE International Conference on Computer Vision (ICCV 2015), Workshop on Inverse Rendering, 2015, Note: This work has been presented as a poster and is not included in the workshop proceedings. (poster)

[BibTex]


Derivation of phenomenological expressions for transition matrix elements for electron-phonon scattering

Illg, C., Haag, M., Müller, B. Y., Czycholl, G., Fähnle, M.

2015 (misc)

link (url) [BibTex]

2006


Some observations on the pedestal effect or dipper function

Henning, B., Wichmann, F.

Journal of Vision, 6(13):50, 2006 Fall Vision Meeting of the Optical Society of America, December 2006 (poster)

Abstract
The pedestal effect is the large improvement in the detectability of a sinusoidal “signal” grating observed when the signal is added to a masking or “pedestal” grating of the same spatial frequency, orientation, and phase. We measured the pedestal effect in both broadband and notched noise (noise from which a 1.5-octave band centred on the signal frequency had been removed). Although the pedestal effect persists in broadband noise, it almost disappears in the notched noise. Furthermore, the pedestal effect is substantial when either high- or low-pass masking noise is used. We conclude that the pedestal effect in the absence of notched noise results principally from the use of information derived from channels with peak sensitivities at spatial frequencies different from that of the signal and pedestal. The spatial-frequency components of the notched noise above and below the spatial frequency of the signal and pedestal prevent the use of information about changes in contrast carried in channels tuned to spatial frequencies that are very much different from that of the signal and pedestal. Thus the pedestal or dipper effect measured without notched noise is not a characteristic of individual spatial-frequency tuned channels.

Web DOI [BibTex]


Optimizing Spatial Filters for BCI: Margin- and Evidence-Maximization Approaches

Farquhar, J., Hill, N., Schölkopf, B.

Challenging Brain-Computer Interfaces: MAIA Workshop 2006, pages: 1, November 2006 (poster)

Abstract
We present easy-to-use alternatives to the often-used two-stage Common Spatial Pattern + classifier approach for spatial filtering and classification of Event-Related Desynchronization signals in BCI. We report two algorithms that aim to optimize the spatial filters according to a criterion more directly related to the ability of the algorithms to generalize to unseen data. Both are based upon the idea of treating the spatial filter coefficients as hyperparameters of a kernel or covariance function. We then optimize these hyperparameters directly alongside the normal classifier parameters with respect to our chosen learning objective function. The two objectives considered are margin maximization as used in Support-Vector Machines and the evidence maximization framework used in Gaussian Processes. Our experiments assessed generalization error as a function of the number of training points used, on 9 BCI competition data sets and 5 offline motor imagery data sets measured in Tübingen. Both our approaches show consistent improvements relative to the commonly used CSP+linear classifier combination. Strikingly, the improvement is most significant in the higher noise cases, when either few trials are used for training, or with the most poorly performing subjects. This is a reversal of the usual "rich get richer" effect in the development of CSP extensions, which tend to perform best when the signal is strong enough to accurately find their additional parameters. This makes our approach particularly suitable for clinical application where high levels of noise are to be expected.

PDF PDF [BibTex]


Learning Eye Movements

Kienzle, W., Wichmann, F., Schölkopf, B., Franz, M.

Sensory Coding And The Natural Environment, 2006, pages: 1, September 2006 (poster)

Abstract
The human visual system samples images through saccadic eye movements which rapidly change the point of fixation. Although the selection of eye movement targets depends on numerous top-down mechanisms, a number of recent studies have shown that low-level image features such as local contrast or edges play an important role. These studies typically used predefined image features which were afterwards experimentally verified. Here, we follow a complementary approach: instead of testing a set of candidate image features, we infer these hypotheses from the data, using methods from statistical learning. To this end, we train a non-linear classifier on fixated vs. randomly selected image patches without making any physiological assumptions. The resulting classifier can be essentially characterized by a nonlinear combination of two center-surround receptive fields. We find that the prediction performance of this simple model on our eye movement data is indistinguishable from the physiologically motivated model of Itti & Koch (2000) which is far more complex. In particular, we obtain a comparable performance without using any multi-scale representations, long-range interactions or oriented image features.

Web [BibTex]


Classification of natural scenes: Critical features revisited

Drewes, J., Wichmann, F., Gegenfurtner, K.

Journal of Vision, 6(6):561, 6th Annual Meeting of the Vision Sciences Society (VSS), June 2006 (poster)

Abstract
Human observers are capable of detecting animals within novel natural scenes with remarkable speed and accuracy. Despite the seeming complexity of such decisions it has been hypothesized that a simple global image feature, the relative abundance of high spatial frequencies at certain orientations, could underlie such fast image classification (A. Torralba & A. Oliva, Network: Comput. Neural Syst., 2003). We successfully used linear discriminant analysis to classify a set of 11,000 images into “animal” and “non-animal” images based on their individual amplitude spectra only (Drewes, Wichmann, Gegenfurtner VSS 2005). We proceeded to sort the images based on the performance of our classifier, retaining only the best and worst classified 400 images (“best animals”, “best distractors” and “worst animals”, “worst distractors”). We used a Go/No-go paradigm to evaluate human performance on this subset of our images. Both reaction time and proportion of correctly classified images showed a significant effect of classification difficulty. Images more easily classified by our algorithm were also classified faster and better by humans, as predicted by the Torralba & Oliva hypothesis. We then equated the amplitude spectra of the 400 images, which, by design, reduced algorithmic performance to chance whereas human performance was only slightly reduced (cf. Wichmann, Rosas, Gegenfurtner, VSS 2005). Most importantly, the same images as before were still classified better and faster, suggesting that even in the original condition features other than specifics of the amplitude spectrum made particular images easy to classify, clearly at odds with the Torralba & Oliva hypothesis.

Web DOI [BibTex]


The pedestal effect is caused by off-frequency looking, not nonlinear transduction or contrast gain-control

Wichmann, F., Henning, B.

Journal of Vision, 6(6):194, 6th Annual Meeting of the Vision Sciences Society (VSS), June 2006 (poster)

Abstract
The pedestal or dipper effect is the large improvement in the detectability of a sinusoidal grating observed when the signal is added to a pedestal or masking grating having the signal's spatial frequency, orientation, and phase. The effect is largest with pedestal contrasts just above the 'threshold' in the absence of a pedestal. We measured the pedestal effect in both broadband and notched masking noise (noise from which a 1.5-octave band centered on the signal and pedestal frequency had been removed). The pedestal effect persists in broadband noise, but almost disappears with notched noise. The spatial-frequency components of the notched noise that lie above and below the spatial frequency of the signal and pedestal prevent the use of information about changes in contrast carried in channels tuned to spatial frequencies that are very much different from that of the signal and pedestal. We conclude that the pedestal effect in the absence of notched noise results principally from the use of information derived from channels with peak sensitivities at spatial frequencies that are different from that of the signal and pedestal. Thus the pedestal or dipper effect is not a characteristic of individual spatial-frequency tuned channels.

Web DOI [BibTex]


The Pedestal Effect is Caused by Off-Frequency Looking, not Nonlinear Transduction or Contrast Gain-Control

Wichmann, F., Henning, G.

9, pages: 174, 9th Tübingen Perception Conference (TWK), March 2006 (poster)

Abstract
The pedestal or dipper effect is the large improvement in the detectability of a sinusoidal grating observed when the signal is added to a pedestal or masking grating having the signal's spatial frequency, orientation, and phase. The effect is largest with pedestal contrasts just above the 'threshold' in the absence of a pedestal. We measured the pedestal effect in both broadband and notched masking noise (noise from which a 1.5-octave band centered on the signal and pedestal frequency had been removed). The pedestal effect persists in broadband noise, but almost disappears with notched noise. The spatial-frequency components of the notched noise that lie above and below the spatial frequency of the signal and pedestal prevent the use of information about changes in contrast carried in channels tuned to spatial frequencies that are very much different from that of the signal and pedestal. We conclude that the pedestal effect in the absence of notched noise results principally from the use of information derived from channels with peak sensitivities at spatial frequencies that are different from that of the signal and pedestal. Thus the pedestal or dipper effect is not a characteristic of individual spatial-frequency tuned channels.

Web [BibTex]


Classification of Natural Scenes: Critical Features Revisited

Drewes, J., Wichmann, F., Gegenfurtner, K.

9, pages: 92, 9th Tübingen Perception Conference (TWK), March 2006 (poster)

Abstract
Human observers are capable of detecting animals within novel natural scenes with remarkable speed and accuracy. Despite the seeming complexity of such decisions it has been hypothesized that a simple global image feature, the relative abundance of high spatial frequencies at certain orientations, could underlie such fast image classification [1]. We successfully used linear discriminant analysis to classify a set of 11,000 images into “animal” and “non-animal” images based on their individual amplitude spectra only [2]. We proceeded to sort the images based on the performance of our classifier, retaining only the best and worst classified 400 images ("best animals", "best distractors" and "worst animals", "worst distractors"). We used a Go/No-go paradigm to evaluate human performance on this subset of our images. Both reaction time and proportion of correctly classified images showed a significant effect of classification difficulty. Images more easily classified by our algorithm were also classified faster and better by humans, as predicted by the Torralba & Oliva hypothesis. We then equated the amplitude spectra of the 400 images, which, by design, reduced algorithmic performance to chance whereas human performance was only slightly reduced [3]. Most importantly, the same images as before were still classified better and faster, suggesting that even in the original condition features other than specifics of the amplitude spectrum made particular images easy to classify, clearly at odds with the Torralba & Oliva hypothesis.

Web [BibTex]


Factorial Coding of Natural Images: How Effective are Linear Models in Removing Higher-Order Dependencies?

Bethge, M.

9, pages: 90, 9th Tübingen Perception Conference (TWK), March 2006 (poster)

Abstract
The performance of unsupervised learning models for natural images is evaluated quantitatively by means of information theory. We estimate the gain in statistical independence (the multi-information reduction) achieved with independent component analysis (ICA), principal component analysis (PCA), zero-phase whitening, and predictive coding. Predictive coding is translated into the transform coding framework, where it can be characterized by the constraint of a triangular filter matrix. A randomly sampled whitening basis and the Haar wavelet are included into the comparison as well. The comparison of all these methods is carried out for different patch sizes, ranging from 2x2 to 16x16 pixels. In spite of large differences in the shape of the basis functions, we find only small differences in the multi-information between all decorrelation transforms (5% or less) for all patch sizes. Among the second-order methods, PCA is optimal for small patch sizes and predictive coding performs best for large patch sizes. The extra gain achieved with ICA is always less than 2%. In conclusion, the 'edge filters' found with ICA lead only to a surprisingly small improvement in terms of its actual objective.
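The multi-information used as the evaluation criterion in this abstract has a closed form for Gaussian models: the sum of marginal entropies minus the joint entropy, which for covariance C reduces to 0.5*(sum_i log C_ii - log det C). A small sketch (with an invented covariance, not the paper's image statistics) shows that any decorrelating transform, such as rotating into the PCA eigenbasis, drives it to zero for a Gaussian; for truly Gaussian data, ICA could therefore gain nothing beyond decorrelation.

```python
import numpy as np

def multi_information_gaussian(C):
    """Multi-information (in nats) of a zero-mean Gaussian with covariance C:
    sum of marginal entropies minus the joint entropy."""
    sign, logdet = np.linalg.slogdet(C)
    return 0.5 * (np.log(np.diag(C)).sum() - logdet)

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4))
C = A @ A.T + 0.5 * np.eye(4)          # an arbitrary correlated covariance
I_raw = multi_information_gaussian(C)

# Rotating into the eigenbasis (a PCA-style transform) decorrelates a Gaussian,
# so the multi-information of the transformed variables vanishes.
_, V = np.linalg.eigh(C)
I_pca = multi_information_gaussian(V.T @ C @ V)
```

By Hadamard's inequality I_raw is nonnegative and is zero only for a diagonal covariance, which is why the residual ICA gain on natural images measures genuinely higher-order structure.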

Web [BibTex]


Classification of natural scenes: critical features revisited

Drewes, J., Wichmann, F., Gegenfurtner, K.

Experimentelle Psychologie: Beiträge zur 48. Tagung experimentell arbeitender Psychologen, 48, pages: 251, 2006 (poster)

[BibTex]


Texture and haptic cues in slant discrimination: combination is sensitive to reliability but not statistically optimal

Rosas, P., Wagemans, J., Ernst, M., Wichmann, F.

Beiträge zur 48. Tagung experimentell arbeitender Psychologen (TeaP 2006), 48, pages: 80, 2006 (poster)

[BibTex]


Ähnlichkeitsmasse in Modellen zur Kategorienbildung

Jäkel, F., Wichmann, F.

Experimentelle Psychologie: Beiträge zur 48. Tagung experimentell arbeitender Psychologen, 48, pages: 223, 2006 (poster)

[BibTex]


The pedestal effect is caused by off-frequency looking, not nonlinear transduction or contrast gain-control

Wichmann, F., Henning, B.

Experimentelle Psychologie: Beiträge zur 48. Tagung experimentell arbeitender Psychologen, 48, pages: 205, 2006 (poster)

[BibTex]

2004


S-cones contribute to flicker brightness in human vision

Wehrhahn, C., Hill, NJ., Dillenburger, B.

34(174.12), 34th Annual Meeting of the Society for Neuroscience (Neuroscience), October 2004 (poster)

Abstract
In the retina of primates, three cone types sensitive to short, middle and long wavelengths of light convert photons into electrical signals. Many investigators have presented evidence that, in color normal observers, the signals of cones sensitive to short wavelengths of light (S-cones) do not contribute to the perception of brightness of a colored surface when this is alternated with an achromatic reference (flicker brightness). Other studies indicate that humans do use S-cone signals when performing this task. Common to all these studies is the small number of observers whose performance data are reported. Considerable variability in the occurrence of cone types across observers has been found, but, to our knowledge, no cone counts exist from larger populations of humans. We reinvestigated how much the S-cones contribute to flicker brightness. 76 color normal observers were tested in a simple psychophysical procedure neutral to the cone type occurrence (Teufel & Wehrhahn (2000), JOSA A 17: 994–1006). The data show that, in the majority of our observers, S-cones provide input with a negative sign, relative to L- and M-cone contribution, in the task in question. There is indeed considerable between-subject variability, such that for 20 out of 76 observers the magnitude of this input does not differ significantly from 0. Finally, we argue that the sign of an observer's S-cone contribution to flicker brightness perception cannot be used to infer the relative sign of their contributions to the neuronal signals carrying the information leading to the perception of flicker brightness. We conclude that studies which use only a small number of observers may easily fail to find significant evidence for the small but significant population tendency for the S-cones to contribute to flicker brightness. Our results confirm all earlier results and reconcile their contradictory interpretations.

Web [BibTex]


Human Classification Behaviour Revisited by Machine Learning

Graf, A., Wichmann, F., Bülthoff, H., Schölkopf, B.

7, pages: 134, (Editors: Bülthoff, H.H., H.A. Mallot, R. Ulrich and F.A. Wichmann), 7th Tübingen Perception Conference (TWK), February 2004 (poster)

Abstract
We attempt to understand visual classification in humans using both psychophysical and machine learning techniques. Frontal views of human faces were used for a gender classification task. Human subjects classified the faces, and their gender judgment, reaction time (RT) and confidence rating (CR) were recorded for each face. RTs are longer for incorrect answers than for correct ones, high CRs are correlated with low classification errors, and RTs decrease as the CRs increase. These results suggest that patterns difficult to classify need more computation by the brain than patterns easy to classify. Hyperplane learning algorithms such as Support Vector Machines (SVM), Relevance Vector Machines (RVM), Prototype learners (Prot) and K-means learners (Kmean) were used on the same classification task using the Principal Components of the texture and flow field representation of the faces. The classification performance of the learning algorithms was estimated using the face database with the true gender of the faces as labels, and also with the gender estimated by the subjects. Kmean yields a classification performance close to humans, while SVM and RVM are much better. This surprising behaviour may be due to the fact that humans are trained on real faces during their lifetime while they were here tested on artificial ones, whereas the algorithms were trained and tested on the same set of stimuli. We then correlated the human responses to the distance of the stimuli to the separating hyperplane (SH) of the learning algorithms. On the whole, stimuli far from the SH are classified more accurately, faster and with higher confidence than those near to the SH if we pool data across all our subjects and stimuli. We also find three noteworthy results. First, SVMs and RVMs can learn to classify faces using the subjects' labels but perform much better when using the true labels. Second, correlating the average response of humans (classification error, RT or CR) with the distance to the SH on a face-by-face basis using Spearman's rank correlation coefficients shows that RVMs recreate human performance most closely in every respect. Third, the mean-of-class prototype, its popularity in neuroscience notwithstanding, is the least human-like classifier in all cases examined.
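The distance-to-hyperplane analysis described above can be sketched in a few lines; the signed-distance formula is standard, while the tie-free rank correlation below is a simplified stand-in for Spearman's coefficient (the function names are illustrative, not the authors' code):

```python
import numpy as np

def signed_distance(X, w, b):
    # Signed distance of each stimulus (row of X) to the
    # separating hyperplane w.x + b = 0
    return (X @ w + b) / np.linalg.norm(w)

def spearman_rho(a, b):
    # Spearman rank correlation = Pearson correlation of the ranks.
    # Simplified: assumes no tied values (ties would need averaged ranks).
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    ra -= ra.mean()
    rb -= rb.mean()
    return float((ra @ rb) / np.sqrt((ra @ ra) * (rb @ rb)))
```

Correlating `spearman_rho(abs(signed_distance(...)), mean_rt_per_face)` per classifier is the kind of face-by-face comparison the abstract reports.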

ei

Web [BibTex]


no image
m-Alternative-Forced-Choice: Improving the Efficiency of the Method of Constant Stimuli

Jäkel, F., Hill, J., Wichmann, F.

7, pages: 118, 7th Tübingen Perception Conference (TWK), February 2004 (poster)

Abstract
We explored several ways to improve the efficiency of measuring psychometric functions without resorting to adaptive procedures. (a) Increasing the number m of alternatives in an m-alternative-forced-choice (m-AFC) task improves the efficiency of the method of constant stimuli. (b) When alternatives are presented simultaneously at different positions on a screen rather than sequentially, time can be saved and the memory load for the subject can be reduced. (c) A touch-screen can further help to make the experimental procedure more intuitive. We tested these ideas in the measurement of contrast sensitivity and compared them to results obtained by sequential presentation in two-interval-forced-choice (2-IFC). Qualitatively, all methods (m-AFC and 2-IFC) recovered the characteristic shape of the contrast sensitivity function in three subjects. The m-AFC paradigm took only about 60% of the time of the 2-IFC task. We tried m=2,4,8 and found 4-AFC to give the best model fits and 2-AFC to have the least bias.
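The benefit of a larger m can be illustrated with the standard m-AFC psychometric function, whose guessing rate is 1/m; the Weibull shape chosen for the underlying detection probability is a common assumption in contrast detection, not something taken from the poster:

```python
import math

def psychometric(x, m, threshold, slope):
    # m-AFC psychometric function: performance rises from the
    # guessing rate gamma = 1/m toward 1 as stimulus level x grows.
    # F is an assumed Weibull-shaped detection probability.
    gamma = 1.0 / m
    F = 1.0 - math.exp(-((x / threshold) ** slope))
    return gamma + (1.0 - gamma) * F
```

Because the chance floor drops from 0.5 (2-AFC) to 0.25 (4-AFC) or 0.125 (8-AFC), each trial is more informative about the observer's sensitivity, which is the efficiency gain the abstract exploits.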

ei

Web [BibTex]


no image
Efficient Approximations for Support Vector Classifiers

Kienzle, W., Franz, M.

7, pages: 68, 7th Tübingen Perception Conference (TWK), February 2004 (poster)

Abstract
In face detection, support vector machines (SVM) and neural networks (NN) have been shown to outperform most other classification methods. While both approaches are learning-based, there are distinct advantages and drawbacks to each method: NNs are difficult to design and train but can lead to very small and efficient classifiers. In comparison, SVM model selection and training is rather straightforward, and, more importantly, guaranteed to converge to a globally optimal (in the sense of training errors) solution. Unfortunately, SVM classifiers tend to have large representations which are inappropriate for time-critical image processing applications. In this work, we examine various existing and new methods for simplifying support vector decision rules. Our goal is to obtain efficient classifiers (as with NNs) while keeping the numerical and statistical advantages of SVMs. For a given SVM solution, we compute a cascade of approximations with increasing complexities. Each classifier is tuned so that the detection rate is near 100%. At run-time, the first (simplest) detector is evaluated on the whole image. Then, any subsequent classifier is applied only to those positions that have been classified as positive throughout all previous stages. The false positive rate at the end equals that of the last (i.e. most complex) detector. In contrast, since many image positions are discarded by lower-complexity classifiers, the average computation time per patch decreases significantly compared to the time needed for evaluating the highest-complexity classifier alone.
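The cascade evaluation strategy described above can be sketched generically; the stage classifiers here are placeholder predicates, not the paper's actual reduced-set SVM approximations:

```python
import numpy as np

def cascade_detect(patches, stages):
    """Apply a cascade of increasingly complex detectors.

    `stages` is a list of functions, each mapping an (n, d) patch array
    to a boolean keep-mask. Each stage runs only on patches that
    survived all previous stages, so cheap early stages discard most
    image positions before the expensive final stage is reached.
    """
    idx = np.arange(len(patches))
    for stage in stages:
        if idx.size == 0:
            break
        keep = stage(patches[idx])
        idx = idx[keep]
    return idx  # indices classified positive by every stage
```

The final false-positive rate equals that of the last stage, exactly as the abstract states, while the average per-patch cost is dominated by the cheap early stages.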

ei

Web [BibTex]


no image
Selective Attention to Auditory Stimuli: A Brain-Computer Interface Paradigm

Hill, N., Lal, T., Schröder, M., Hinterberger, T., Birbaumer, N., Schölkopf, B.

7, pages: 102, (Editors: Bülthoff, H.H., H.A. Mallot, R. Ulrich and F.A. Wichmann), 7th Tübingen Perception Conference (TWK), February 2004 (poster)

Abstract
During the last 20 years several paradigms for Brain Computer Interfaces have been proposed; see [1] for a recent review. They can be divided into (a) stimulus-driven paradigms, using e.g. event-related potentials or visual evoked potentials from an EEG signal, and (b) patient-driven paradigms such as those that use premotor potentials correlated with imagined action, or slow cortical potentials (e.g. [2]). Our aim is to develop a stimulus-driven paradigm that is applicable in practice to patients. Due to the unreliability of visual perception in “locked-in” patients in the later stages of disorders such as Amyotrophic Lateral Sclerosis, we concentrate on the auditory modality. Specifically, we look for the effects, in the EEG signal, of selective attention to one of two concurrent auditory stimulus streams, exploiting the increased activation to attended stimuli that is seen under some circumstances [3]. We present the results of our preliminary experiments on normal subjects. On each of 400 trials, two repetitive stimuli (sequences of drum-beats or other pulsed stimuli) could be heard simultaneously. The two stimuli were distinguishable from one another by their acoustic properties, by their source location (one from a speaker to the left of the subject, the other from the right), and by their differing periodicities. A visual cue preceded the stimulus by 500 msec, indicating which of the two stimuli to attend to, and the subject was instructed to count the beats in the attended stimulus stream. There were up to 6 beats of each stimulus: with equal probability on each trial, all 6 were played, or the fourth was omitted, or the fifth was omitted. The 40-channel EEG signals were analyzed offline to reconstruct which of the streams was attended on each trial. A linear Support Vector Machine [4] was trained on a random subset of the data and tested on the remainder. Results are compared from two types of pre-processing of the signal: for each stimulus stream, (a) EEG signals at the stream's beat periodicity are emphasized, or (b) EEG signals following beats are contrasted with those following missing beats. Both forms of pre-processing show promising results, i.e. that selective attention to one or the other auditory stream yields signals that are classifiable significantly above chance performance. In particular, the second pre-processing was found to be robust to reduction in the number of features used for classification (cf. [5]), helping us to eliminate noise.

ei

PDF Web [BibTex]


no image
Texture and Haptic Cues in Slant Discrimination: Measuring the Effect of Texture Type

Rosas, P., Wichmann, F., Ernst, M., Wagemans, J.

7, pages: 165, (Editors: Bülthoff, H.H., H.A. Mallot, R. Ulrich, F.A. Wichmann), 7th Tübingen Perception Conference (TWK), February 2004 (poster)

Abstract
In a number of models of depth cue combination the depth percept is constructed via a weighted average combination of independent depth estimations. The influence of each cue in such an average depends on the reliability of the source of information [1,5]. In particular, Ernst and Banks (2002) formulate such a combination as that of the minimum variance unbiased estimator that can be constructed from the available cues. We have observed systematic differences in slant discrimination performance of human observers when different types of textures were used as cue to slant [4]. If the depth percept behaves as described above, our measurements of the slopes of the psychometric functions provide the predicted weights for the texture cue for the ranked texture types. However, the results for slant discrimination obtained when combining these texture types with object motion are difficult to reconcile with the minimum variance unbiased estimator model [3]. This apparent failure of the model might be explained by the existence of a coupling of texture and motion, violating the assumption of independence of cues. Hillis, Ernst, Banks, and Landy (2002) [2] have shown that while for between-modality combination the human visual system has access to the single-cue information, for within-modality combination (visual cues) the single-cue information is lost. This suggests a coupling between visual cues and independence between visual and haptic cues. In the present study we therefore combined the different texture types with haptic information in a slant discrimination task, to test whether in the between-modality condition these cues are combined as predicted by an unbiased, minimum variance estimator model. The measured weights for the cues were consistent with a combination rule sensitive to the reliability of the sources of information, but did not match the predictions of a statistically optimal combination.
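The minimum-variance unbiased combination rule referred to above (Ernst & Banks, 2002) weights each independent cue inversely to its variance; a minimal sketch:

```python
import numpy as np

def combine_cues(estimates, variances):
    # Minimum-variance unbiased combination of independent cues:
    # weight w_i proportional to 1/variance_i, normalized to sum to 1.
    # The combined variance 1 / sum(1/var_i) is never larger than the
    # smallest single-cue variance.
    v = np.asarray(variances, dtype=float)
    w = (1.0 / v) / np.sum(1.0 / v)
    combined = float(np.dot(w, estimates))
    combined_var = 1.0 / np.sum(1.0 / v)
    return combined, combined_var, w
```

In the slant task above, the slopes of the single-cue psychometric functions play the role of the (inverse) variances, yielding the predicted per-texture-type weights that the measured weights are compared against.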

ei

PDF Web [BibTex]


no image
EEG Channel Selection for Brain Computer Interface Systems Based on Support Vector Methods

Schröder, M., Lal, T., Bogdan, M., Schölkopf, B.

7, pages: 50, (Editors: Bülthoff, H.H., H.A. Mallot, R. Ulrich and F.A. Wichmann), 7th Tübingen Perception Conference (TWK), February 2004 (poster)

Abstract
A Brain Computer Interface (BCI) system allows the direct interpretation of brain activity patterns (e.g. EEG signals) by a computer. Typical BCI applications comprise spelling aids or environmental control systems supporting paralyzed patients who have lost motor control completely. The design of an EEG-based BCI system requires good answers to the problem of selecting useful features during the performance of a mental task as well as to the problem of classifying these features. For the special case of choosing appropriate EEG channels from several available channels, we propose the application of variants of the Support Vector Machine (SVM) for both problems. Although these algorithms do not rely on prior knowledge, they can provide more accurate solutions than standard filter methods [1] for feature selection, which usually incorporate prior knowledge about neural activity patterns during the performed mental tasks. For judging the importance of features we introduce a new relevance measure and apply it to EEG channels. Although we base the relevance measure for this purpose on the previously introduced algorithms, it does in general not depend on specific algorithms but can be derived using arbitrary combinations of feature selectors and classifiers.
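One simple relevance measure in the spirit described above ranks channels by the weight mass a trained linear classifier assigns to each channel's block of features; this particular measure is an illustration only, not the poster's definition:

```python
import numpy as np

def channel_relevance(w, n_channels):
    # Hypothetical channel-relevance measure: split a trained linear
    # classifier's weight vector into equal-sized per-channel blocks
    # and score each channel by the l2 norm of its block.
    blocks = np.reshape(np.asarray(w, dtype=float), (n_channels, -1))
    scores = np.linalg.norm(blocks, axis=1)
    order = np.argsort(scores)[::-1]  # channels, most relevant first
    return order, scores
```

Channels with near-zero scores contribute little to the decision function and are candidates for removal, which is the practical goal of EEG channel selection.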

ei

Web [BibTex]


no image
Learning Depth

Sinz, F., Franz, MO.

pages: 69, (Editors: H.H. Bülthoff, H.A. Mallot, R. Ulrich, F.A. Wichmann), 7th Tübingen Perception Conference (TWK), February 2004 (poster)

Abstract
The depth of a point in space can be estimated by observing its image position from two different viewpoints. The classical approach to stereo vision calculates depth from the two projection equations which together form a stereo camera model. An unavoidable preparatory step for this solution is a calibration procedure, i.e., estimating the external (position and orientation) and internal (focal length, lens distortions etc.) parameters of each camera from a set of points with known spatial position and their corresponding image positions. This is normally done by iteratively linearizing the single camera models and re-estimating their parameters according to the error on the known data points. The advantage of the classical method is the maximal usage of prior knowledge about the underlying physical processes and the explicit estimation of meaningful model parameters such as focal length or camera position in space. However, the approach neglects the nonlinear nature of the problem, such that the results critically depend on the choice of the initial values for the parameters. In this study, we approach the depth estimation problem from a different point of view by applying generic machine learning algorithms to learn the mapping from image coordinates to spatial position. These algorithms do not require any domain knowledge and are able to learn nonlinear functions by mapping the inputs into a higher-dimensional space. Compared to classical calibration, machine learning methods give a direct solution to the depth estimation problem, which means that the values of the stereo camera parameters cannot be extracted from the learned mapping. On the poster, we compare the performance of classical camera calibration to that of different machine learning algorithms such as kernel ridge regression, Gaussian processes and support vector regression. Our results indicate that generic learning approaches can lead to higher depth accuracies than classical calibration although no domain knowledge is used.
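One of the generic learners mentioned, kernel ridge regression with a Gaussian kernel, can be sketched as follows (here mapping image coordinates to depth; the parameter values are illustrative, not those used on the poster):

```python
import numpy as np

def kernel_ridge_fit(X, y, gamma=1.0, lam=1e-6):
    # Gaussian-kernel ridge regression: solve (K + lam*I) alpha = y,
    # where K_ij = exp(-gamma * ||x_i - x_j||^2).
    sq = np.sum(X ** 2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2.0 * X @ X.T))
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def kernel_ridge_predict(X_train, alpha, X_new, gamma=1.0):
    # Prediction is a kernel-weighted sum over the training points,
    # i.e. the learned nonlinear mapping from coordinates to depth.
    d2 = np.sum((X_new[:, None, :] - X_train[None, :, :]) ** 2, axis=2)
    return np.exp(-gamma * d2) @ alpha
```

Unlike classical calibration, nothing in `alpha` corresponds to a focal length or camera pose; the mapping is learned end-to-end, which is exactly the trade-off the abstract describes.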

ei

PDF Web [BibTex]


no image
Neural mechanisms underlying control of a Brain-Computer-Interface (BCI): Simultaneous recording of bold-response and EEG

Hinterberger, T., Wilhelm, B., Veit, R., Weiskopf, N., Lal, TN., Birbaumer, N.

2004 (poster)

Abstract
Brain computer interfaces (BCI) enable humans or animals to communicate or activate external devices without muscle activity, using electric brain signals. The BCI Thought Translation Device (TTD) uses learned regulation of slow cortical potentials (SCPs), a skill most people and paralyzed patients can acquire with training periods of several hours up to months. The neurophysiological mechanisms and anatomical sources of SCPs and other event-related brain macro-potentials are well understood, but the neural mechanisms underlying learning of the self-regulation skill for BCI use are unknown. To uncover the relevant areas of brain activation during regulation of SCPs, the TTD was combined with functional MRI, and EEG was recorded inside the MRI scanner in twelve healthy participants who had learned to regulate their SCP with feedback and reinforcement. The results demonstrate activation of specific brain areas during execution of the brain regulation skill: successful control of cortical positivity allowing a person to activate an external device was closely related to an increase of the BOLD (blood oxygen level dependent) response in the basal ganglia and frontal premotor deactivation, indicating learned regulation of a cortico-striatal loop responsible for local excitation thresholds of cortical assemblies. The data suggest that human users of a BCI learn the regulation of cortical excitation thresholds of large neuronal assemblies as a prerequisite of direct brain communication; the learning of this skill depends critically on an intact and flexible interaction between these cortico-basal ganglia circuits. Supported by the Deutsche Forschungsgemeinschaft (DFG) and the National Institutes of Health (NIH).

ei

[BibTex]