

2019


Semi-supervised learning, causality, and the conditional cluster assumption

von Kügelgen, J., Mey, A., Loog, M., Schölkopf, B.

NeurIPS 2019 Workshop “Do the right thing”: machine learning and causal inference for improved decision making, December 2019 (poster) Accepted

ei

link (url) [BibTex]



Optimal experimental design via Bayesian optimization: active causal structure learning for Gaussian process networks

von Kügelgen, J., Rubenstein, P., Schölkopf, B., Weller, A.

NeurIPS 2019 Workshop “Do the right thing”: machine learning and causal inference for improved decision making, December 2019 (poster) Accepted

ei

link (url) [BibTex]



AirCap – Aerial Outdoor Motion Capture

Ahmad, A., Price, E., Tallamraju, R., Saini, N., Lawless, G., Ludwig, R., Martinovic, I., Bülthoff, H. H., Black, M. J.

IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2019), Workshop on Aerial Swarms, November 2019 (misc)

Abstract
This paper presents an overview of the Grassroots project Aerial Outdoor Motion Capture (AirCap) running at the Max Planck Institute for Intelligent Systems. AirCap's goal is to achieve markerless, unconstrained human motion capture (mocap) in unknown and unstructured outdoor environments. To that end, we have developed an autonomous flying motion capture system using a team of micro aerial vehicles (MAVs) with only on-board, monocular RGB cameras. We have conducted several real robot experiments involving up to three MAVs autonomously tracking and following a person in several challenging scenarios using our approach of active cooperative perception developed in AirCap. Using the images captured by these robots during the experiments, we have demonstrated successful offline body pose and shape estimation with sufficiently high accuracy. Overall, we have demonstrated the first fully autonomous flying motion capture system involving multiple robots for outdoor scenarios.

ps

[BibTex]



Method for providing a three dimensional body model

Loper, M., Mahmood, N., Black, M.

September 2019, U.S. Patent 10,417,818 (misc)

Abstract
A method for providing a three-dimensional body model which may be applied for an animation, based on a moving body, wherein the method comprises providing a parametric three-dimensional body model, which allows shape and pose variations; applying a standard set of body markers; optimizing the set of body markers by generating an additional set of body markers and applying the same for providing 3D coordinate marker signals for capturing shape and pose of the body and dynamics of soft tissue; and automatically providing an animation by processing the 3D coordinate marker signals in order to provide a personalized three-dimensional body model, based on estimated shape and an estimated pose of the body by means of predicted marker locations.

ps

MoSh Project pdf [BibTex]


High-Fidelity Multiphysics Finite Element Modeling of Finger-Surface Interactions with Tactile Feedback

Serhat, G., Kuchenbecker, K. J.

Work-in-progress paper (2 pages) presented at the IEEE World Haptics Conference (WHC), Tokyo, Japan, July 2019 (misc)

Abstract
In this study, we develop a high-fidelity finite element (FE) analysis framework that enables multiphysics simulation of the human finger in contact with a surface that is providing tactile feedback. We aim to elucidate a variety of physical interactions that can occur at finger-surface interfaces, including contact, friction, vibration, and electrovibration. We also develop novel FE-based methods that will allow prediction of nonconventional features such as real finger-surface contact area and finger stickiness. We envision using the developed computational tools for efficient design and optimization of haptic devices by replacing expensive and lengthy experimental procedures with high-fidelity simulation.

hi

[BibTex]



Fingertip Friction Enhances Perception of Normal Force Changes

Gueorguiev, D., Lambert, J., Thonnard, J., Kuchenbecker, K. J.

Work-in-progress paper (2 pages) presented at the IEEE World Haptics Conference (WHC), Tokyo, Japan, July 2019 (misc)

Abstract
Using a force-controlled robotic platform, we tested the human perception of positive and negative modulations in normal force during passive dynamic touch, which also induced a strong related change in the finger-surface lateral force. In a two-alternative forced-choice task, eleven participants had to detect brief variations in the normal force compared to a constant controlled pre-stimulation force of 1 N and report whether it had increased or decreased. The average 75% just noticeable difference (JND) was found to be around 0.25 N for detecting the peak change and 0.30 N for correctly reporting the increase or the decrease. Interestingly, the friction coefficient of a subject’s fingertip positively correlated with his or her performance at detecting the change and reporting its direction, which suggests that humans may use the lateral force as a sensory cue to perceive variations in the normal force.

hi

[BibTex]



Inflatable Haptic Sensor for the Torso of a Hugging Robot

Block, A. E., Kuchenbecker, K. J.

Work-in-progress paper (2 pages) presented at the IEEE World Haptics Conference (WHC), Tokyo, Japan, July 2019 (misc)

Abstract
During hugs, humans naturally provide and intuit subtle non-verbal cues that signify the strength and duration of an exchanged hug. Personal preferences for this close interaction may vary greatly between people; robots do not currently have the abilities to perceive or understand these preferences. This work-in-progress paper discusses designing, building, and testing a novel inflatable torso that can simultaneously soften a robot and act as a tactile sensor to enable more natural and responsive hugging. Using PVC vinyl, a microphone, and a barometric pressure sensor, we created a small test chamber to demonstrate a proof of concept for the full torso. While contacting the chamber in several ways common in hugs (pat, squeeze, scratch, and rub), we recorded data from the two sensors. The preliminary results suggest that the complementary haptic sensing channels allow us to detect coarse and fine contacts typically experienced during hugs, regardless of user hand placement.

hi

Project Page [BibTex]



Understanding the Pull-off Force of the Human Fingerpad

Nam, S., Kuchenbecker, K. J.

Work-in-progress paper (2 pages) presented at the IEEE World Haptics Conference (WHC), Tokyo, Japan, July 2019 (misc)

Abstract
To understand the adhesive force that occurs when a finger pulls off of a smooth surface, we built an apparatus to measure the fingerpad’s moisture, normal force, and real contact area over time during interactions with a glass plate. We recorded a total of 450 trials (45 interactions by each of ten human subjects), capturing a wide range of values across the aforementioned variables. The experimental results showed that the pull-off force increases with larger finger contact area and faster detachment rate. Additionally, moisture generally increases the contact area of the finger, but too much moisture can restrict the increase in the pull-off force.

hi

[BibTex]



The Haptician and the Alphamonsters

Forte, M. P., L’Orsa, R., Mohan, M., Nam, S., Kuchenbecker, K. J.

Student Innovation Challenge on Implementing Haptics in Virtual Reality Environment, presented at the IEEE World Haptics Conference, Tokyo, Japan, July 2019 (misc). Maria Paola Forte, Rachael L'Orsa, Mayumi Mohan, and Saekwang Nam contributed equally to this publication.

Abstract
Dysgraphia is a neurological disorder characterized by writing disabilities that affects between 7% and 15% of children. It presents itself in the form of unfinished letters, letter distortion, inconsistent letter size, letter collision, etc. Traditional therapeutic exercises require continuous assistance from teachers or occupational therapists. Autonomous partial or full haptic guidance can produce positive results, but children often become bored with the repetitive nature of such activities. Conversely, virtual rehabilitation with video games represents a new frontier for occupational therapy due to its highly motivational nature. Virtual reality (VR) adds an element of novelty and entertainment to therapy, thus motivating players to perform exercises more regularly. We propose leveraging the HTC VIVE Pro and the EXOS Wrist DK2 to create an immersive spellcasting “exergame” (exercise game) that helps motivate children with dysgraphia to improve writing fluency.

hi

Student Innovation Challenge – Virtual Reality [BibTex]



Explorations of Shape-Changing Haptic Interfaces for Blind and Sighted Pedestrian Navigation

Spiers, A., Kuchenbecker, K. J.

Workshop paper (6 pages) presented at the CHI 2019 Workshop on Hacking Blind Navigation, May 2019 (misc) Accepted

Abstract
Since the 1960s, technologists have worked to develop systems that facilitate independent navigation by vision-impaired (VI) pedestrians. These devices vary in terms of conveyed information and feedback modality. Unfortunately, many such prototypes never progress beyond laboratory testing. Conversely, smartphone-based navigation systems for sighted pedestrians have grown in robustness and capabilities, to the point of now being ubiquitous. How can we leverage the success of sighted navigation technology, which is driven by a larger global market, as a way to progress VI navigation systems? We believe one possibility is to make common devices that benefit both VI and sighted individuals, by providing information in a way that does not distract either user from their tasks or environment. To this end we have developed physical interfaces that eschew visual, audio or vibratory feedback, instead relying on the natural human ability to perceive the shape of a handheld object.

hi

[BibTex]



Bimanual Wrist-Squeezing Haptic Feedback Changes Speed-Force Tradeoff in Robotic Surgery Training

Cao, E., Machaca, S., Bernard, T., Wolfinger, B., Patterson, Z., Chi, A., Adrales, G. L., Kuchenbecker, K. J., Brown, J. D.

Extended abstract presented as an ePoster at the Annual Meeting of the Society of American Gastrointestinal and Endoscopic Surgeons (SAGES), Baltimore, USA, April 2019 (misc) Accepted

hi

[BibTex]



Interactive Augmented Reality for Robot-Assisted Surgery

Forte, M. P., Kuchenbecker, K. J.

Extended abstract presented as an Emerging Technology ePoster at the Annual Meeting of the Society of American Gastrointestinal and Endoscopic Surgeons (SAGES), Baltimore, Maryland, USA, April 2019 (misc)

hi

Project Page [BibTex]



Demo Abstract: Fast Feedback Control and Coordination with Mode Changes for Wireless Cyber-Physical Systems

(Best Demo Award)

Mager, F., Baumann, D., Jacob, R., Thiele, L., Trimpe, S., Zimmerling, M.

Proceedings of the 18th ACM/IEEE Conference on Information Processing in Sensor Networks (IPSN), pages: 340-341, 18th ACM/IEEE Conference on Information Processing in Sensor Networks (IPSN), April 2019 (poster)

ics

arXiv PDF DOI [BibTex]



A Design Tool for Therapeutic Social-Physical Human-Robot Interactions

Mohan, M., Kuchenbecker, K. J.

Workshop paper (3 pages) presented at the HRI Pioneers Workshop, Daegu, South Korea, March 2019 (misc) Accepted

Abstract
We live in an aging society; social-physical human-robot interaction has the potential to keep our elderly adults healthy by motivating them to exercise. After summarizing prior work, this paper proposes a tool that can be used to design exercise and therapy interactions to be performed by an upper-body humanoid robot. The interaction design tool comprises a teleoperation system that transmits the operator’s arm motions, head motions and facial expression along with an interface to monitor and assess the motion of the user interacting with the robot. We plan to use this platform to create dynamic and intuitive exercise interactions.

hi

Project Page [BibTex]



Perceiving Systems (2016-2018)
Scientific Advisory Board Report, 2019 (misc)

ps

pdf [BibTex]



More Powerful Selective Kernel Tests for Feature Selection

Lim, J. N., Yamada, M., Jitkrittum, W., Terada, Y., Matsui, S., Shimodaira, H.

2019 (misc) Submitted

ei

arXiv [BibTex]



Toward Expert-Sourcing of a Haptic Device Repository

Seifi, H., Ip, J., Agrawal, A., Kuchenbecker, K. J., MacLean, K. E.

Glasgow, UK, 2019 (misc)

Abstract
Haptipedia is an online taxonomy, database, and visualization that aims to accelerate ideation of new haptic devices and interactions in human-computer interaction, virtual reality, haptics, and robotics. The current version of Haptipedia (105 devices) was created through iterative design, data entry, and evaluation by our team of experts. Next, we aim to greatly increase the number of devices and keep Haptipedia updated by soliciting data entry and verification from haptics experts worldwide.

hi

link (url) [BibTex]



Perception of temporal dependencies in autoregressive motion

Meding, K., Schölkopf, B., Wichmann, F. A.

European Conference on Visual Perception (ECVP), 2019 (poster)

ei

[BibTex]



A special issue on hydrogen-based Energy storage

Hirscher, M.

International Journal of Hydrogen Energy, 44, pages: 7737, Elsevier, Amsterdam, 2019 (misc)

mms

DOI [BibTex]



Nanoscale X-ray imaging of spin dynamics in Yttrium iron garnet

Förster, J., Wintz, S., Bailey, J., Finizio, S., Josten, E., Meertens, D., Dubs, C., Bozhko, D. A., Stoll, H., Dieterle, G., Traeger, N., Raabe, J., Slavin, A. N., Weigand, M., Gräfe, J., Schütz, G.

2019 (misc)

mms

link (url) [BibTex]



Reconfigurable nanoscale spin wave majority gate with frequency-division multiplexing

Talmelli, G., Devolder, T., Träger, N., Förster, J., Wintz, S., Weigand, M., Stoll, H., Heyns, M., Schütz, G., Radu, I., Gräfe, J., Ciubotaru, F., Adelmann, C.

2019 (misc)

Abstract
Spin waves are excitations in ferromagnetic media that have been proposed as information carriers in spintronic devices with potentially much lower operation power than conventional charge-based electronics. The wave nature of spin waves can be exploited to design majority gates by coding information in their phase and using interference for computation. However, a scalable spin wave majority gate design that can be co-integrated alongside conventional Si-based electronics is still lacking. Here, we demonstrate a reconfigurable nanoscale inline spin wave majority gate with ultrasmall footprint, frequency-division multiplexing, and fan-out. Time-resolved imaging of the magnetisation dynamics by scanning transmission x-ray microscopy reveals the operation mode of the device and validates the full logic majority truth table. All-electrical spin wave spectroscopy further demonstrates spin wave majority gates with sub-micron dimensions, sub-micron spin wave wavelengths, and reconfigurable input and output ports. We also show that interference-based computation allows for frequency-division multiplexing as well as the computation of different logic functions in the same device. Such devices can thus form the foundation of a future spin-wave-based superscalar vector computing platform.

mms

link (url) [BibTex]



Visual-Inertial Mapping with Non-Linear Factor Recovery

Usenko, V., Demmel, N., Schubert, D., Stückler, J., Cremers, D.

2019, arXiv:1904.06504 (misc)

ev

[BibTex]



Phenomenal Causality and Sensory Realism

Bruijns, S. A., Meding, K., Schölkopf, B., Wichmann, F. A.

European Conference on Visual Perception (ECVP), 2019 (poster)

ei

[BibTex]



Hydrogen Energy

Hirscher, M., Autrey, T., Orimo, S.

ChemPhysChem, 20, pages: 1153-1411, Wiley-VCH, Weinheim, Germany, 2019 (misc)

mms

link (url) DOI [BibTex]


2012


Methods, apparatuses, and systems for micromanipulation with adhesive fibrillar structures

Sitti, M., Mengüç, Y.

December 2012, US Patent App. 14/368,079 (misc)

pi

[BibTex]




Dry adhesive structures

Sitti, M., Murphy, M., Aksak, B.

December 2012, US Patent App. 13/533,386 (misc)

pi

[BibTex]



Methods of making dry adhesives

Sitti, M., Murphy, M., Aksak, B.

June 2012, US Patent 8,206,631 (misc)

pi

[BibTex]



Blind Retrospective Motion Correction of MR Images

Loktyushin, A., Nickisch, H., Pohmann, R., Schölkopf, B.

20th Annual Scientific Meeting ISMRM, May 2012 (poster)

Abstract
Patient motion in the scanner is one of the most challenging problems in MRI. We propose a new retrospective motion correction method for which no tracking devices or specialized sequences are required. We seek the motion parameters such that the image gradients in the spatial domain become sparse. We then use these parameters to invert the motion and recover the sharp image. In our experiments we acquired 2D TSE images and 3D FLASH/MPRAGE volumes of the human head. Major quality improvements are possible in the 2D case and substantial improvements in the 3D case.

ei

Web [BibTex]



Dry adhesives and methods for making dry adhesives

Sitti, M., Murphy, M., Aksak, B.

March 2012, US Patent App. 13/429,621 (misc)

pi

[BibTex]



Identifying endogenous rhythmic spatio-temporal patterns in micro-electrode array recordings

Besserve, M., Panagiotaropoulos, T., Crocker, B., Kapoor, V., Tolias, A., Panzeri, S., Logothetis, N.

9th annual Computational and Systems Neuroscience meeting (Cosyne), 2012 (poster)

ei

[BibTex]



Reconstruction using Gaussian mixture models

Joubert, P., Habeck, M.

2012 Gordon Research Conference on Three-Dimensional Electron Microscopy (3DEM), 2012 (poster)

ei

Web [BibTex]



Learning from Distributions via Support Measure Machines

Muandet, K., Fukumizu, K., Dinuzzo, F., Schölkopf, B.

26th Annual Conference on Neural Information Processing Systems (NIPS), 2012 (poster)

ei

PDF [BibTex]



Juggling Increases Interhemispheric Brain Connectivity: A Visual and Quantitative dMRI Study

Schultz, T., Gerber, P., Schmidt-Wilcke, T.

Vision, Modeling and Visualization (VMV), 2012 (poster)

ei

[BibTex]



The geometry and statistics of geometric trees

Feragen, A., Lo, P., de Bruijne, M., Nielsen, M., Lauze, F.

TübIt day of bioinformatics, June 2012 (poster)

ei

[BibTex]



Therapy monitoring of patients with chronic sclerodermic graft-versus-host-disease using PET/MRI

Sauter, A., Schmidt, H., Mantlik, F., Kolb, A., Federmann, B., Bethge, W., Reimold, M., Pfannenberg, C., Pichler, B., Horger, M.

2012 SNM Annual Meeting, 2012 (poster)

ei

Web [BibTex]



Centrality of the Mammalian Functional Brain Network

Besserve, M., Bartels, A., Murayama, Y., Logothetis, N.

42nd Annual Meeting of the Society for Neuroscience (Neuroscience), 2012 (poster)

ei

[BibTex]



Kernel Mean Embeddings of POMDPs

Nishiyama, Y., Boularias, A., Gretton, A., Fukumizu, K.

21st Machine Learning Summer School, 2012 (poster)

ei

[BibTex]



Semi-Supervised Domain Adaptation with Copulas

Lopez-Paz, D., Hernandez-Lobato, J., Schölkopf, B.

Neural Information Processing Systems (NIPS), 2012 (poster)

ei

PDF [BibTex]



Evaluation of Whole-Body MR-Based Attenuation Correction in Bone and Soft Tissue Lesions

Bezrukov, I., Mantlik, F., Schmidt, H., Schwenzer, N., Brendle, C., Schölkopf, B., Pichler, B.

Nuclear Science Symposium and Medical Imaging Conference (NSS-MIC), 2012 (poster)

ei

[BibTex]



The PET Performance Measurements of A Next Generation Dedicated Small Animal PET/MR Scanner

Liu, C., Hossain, M., Bezrukov, I., Wehrl, H., Kolb, A., Judenhofer, M., Pichler, B.

World Molecular Imaging Congress (WMIC), 2012 (poster)

ei

[BibTex]


2004


S-cones contribute to flicker brightness in human vision

Wehrhahn, C., Hill, NJ., Dillenburger, B.

34(174.12), 34th Annual Meeting of the Society for Neuroscience (Neuroscience), October 2004 (poster)

Abstract
In the retina of primates three cone types sensitive to short, middle and long wavelengths of light convert photons into electrical signals. Many investigators have presented evidence that, in color normal observers, the signals of cones sensitive to short wavelengths of light (S-cones) do not contribute to the perception of brightness of a colored surface when this is alternated with an achromatic reference (flicker brightness). Other studies indicate that humans do use S-cone signals when performing this task. Common to all these studies is the small number of observers whose performance data are reported. Considerable variability in the occurrence of cone types across observers has been found, but, to our knowledge, no cone counts exist from larger populations of humans. We reinvestigated how much the S-cones contribute to flicker brightness. 76 color normal observers were tested in a simple psychophysical procedure neutral to the cone type occurrence (Teufel & Wehrhahn (2000), JOSA A 17: 994 - 1006). The data show that, in the majority of our observers, S-cones provide input with a negative sign - relative to L- and M-cone contribution - in the task in question. There is indeed considerable between-subject variability, such that for 20 out of 76 observers the magnitude of this input does not differ significantly from 0. Finally, we argue that the sign of an observer's S-cone contribution to flicker brightness perception cannot be used to infer the relative sign of their contributions to the neuronal signals carrying the information leading to the perception of flicker brightness. We conclude that studies which use only a small number of observers may easily fail to find significant evidence for the small but significant population tendency for the S-cones to contribute to flicker brightness. Our results confirm all earlier results and reconcile their contradictory interpretations.

ei

Web [BibTex]



Human Classification Behaviour Revisited by Machine Learning

Graf, A., Wichmann, F., Bülthoff, H., Schölkopf, B.

7, pages: 134, (Editors: Bülthoff, H. H., H. A. Mallot, R. Ulrich and F. A. Wichmann), 7th Tübingen Perception Conference (TWK), February 2004 (poster)

Abstract
We attempt to understand visual classification in humans using both psychophysical and machine learning techniques. Frontal views of human faces were used for a gender classification task. Human subjects classified the faces, and their gender judgment, reaction time (RT) and confidence rating (CR) were recorded for each face. RTs are longer for incorrect answers than for correct ones, high CRs are correlated with low classification errors, and RTs decrease as the CRs increase. These results suggest that patterns difficult to classify need more computation by the brain than patterns easy to classify. Hyperplane learning algorithms such as Support Vector Machines (SVM), Relevance Vector Machines (RVM), Prototype learners (Prot) and K-means learners (Kmean) were used on the same classification task using the Principal Components of the texture and flowfield representation of the faces. The classification performance of the learning algorithms was estimated using the face database with the true gender of the faces as labels, and also with the gender estimated by the subjects. Kmean yields a classification performance close to humans, while SVM and RVM are much better. This surprising behaviour may be due to the fact that humans are trained on real faces during their lifetime while they were here tested on artificial ones, whereas the algorithms were trained and tested on the same set of stimuli. We then correlated the human responses to the distance of the stimuli to the separating hyperplane (SH) of the learning algorithms. On the whole, stimuli far from the SH are classified more accurately, faster and with higher confidence than those near to the SH if we pool data across all our subjects and stimuli. We also find three noteworthy results. First, SVMs and RVMs can learn to classify faces using the subjects' labels but perform much better when using the true labels. Second, correlating the average response of humans (classification error, RT or CR) with the distance to the SH on a face-by-face basis using Spearman's rank correlation coefficients shows that RVMs recreate human performance most closely in every respect. Third, the mean-of-class prototype, its popularity in neuroscience notwithstanding, is the least human-like classifier in all cases examined.

ei

Web [BibTex]



m-Alternative-Forced-Choice: Improving the Efficiency of the Method of Constant Stimuli

Jäkel, F., Hill, J., Wichmann, F.

7, pages: 118, 7th Tübingen Perception Conference (TWK), February 2004 (poster)

Abstract
We explored several ways to improve the efficiency of measuring psychometric functions without resorting to adaptive procedures. a) The number m of alternatives in an m-alternative-forced-choice (m-AFC) task improves the efficiency of the method of constant stimuli. b) When alternatives are presented simultaneously on different positions on a screen rather than sequentially, time can be saved and memory load for the subject can be reduced. c) A touch-screen can further help to make the experimental procedure more intuitive. We tested these ideas in the measurement of contrast sensitivity and compared them to results obtained by sequential presentation in two-interval-forced-choice (2-IFC). Qualitatively, all methods (m-AFC and 2-IFC) recovered the characteristic shape of the contrast sensitivity function in three subjects. The m-AFC paradigm only took about 60% of the time of the 2-IFC task. We tried m=2,4,8 and found 4-AFC to give the best model fits and 2-AFC to have the least bias.

ei

Web [BibTex]



Efficient Approximations for Support Vector Classifiers

Kienzle, W., Franz, M.

7, pages: 68, 7th Tübingen Perception Conference (TWK), February 2004 (poster)

Abstract
In face detection, support vector machines (SVM) and neural networks (NN) have been shown to outperform most other classification methods. While both approaches are learning-based, there are distinct advantages and drawbacks to each method: NNs are difficult to design and train but can lead to very small and efficient classifiers. In comparison, SVM model selection and training is rather straightforward, and, more importantly, guaranteed to converge to a globally optimal (in the sense of training errors) solution. Unfortunately, SVM classifiers tend to have large representations which are inappropriate for time-critical image processing applications. In this work, we examine various existing and new methods for simplifying support vector decision rules. Our goal is to obtain efficient classifiers (as with NNs) while keeping the numerical and statistical advantages of SVMs. For a given SVM solution, we compute a cascade of approximations with increasing complexities. Each classifier is tuned so that the detection rate is near 100%. At run-time, the first (simplest) detector is evaluated on the whole image. Then, any subsequent classifier is applied only to those positions that have been classified as positive throughout all previous stages. The false positive rate at the end equals that of the last (i.e. most complex) detector. In contrast, since many image positions are discarded by lower-complexity classifiers, the average computation time per patch decreases significantly compared to the time needed for evaluating the highest-complexity classifier alone.

ei

Web [BibTex]
