Wearable sensing and feedback devices are becoming increasingly common for measuring human movement in research laboratories, medical clinics, and consumer products. Advances in computation and miniaturization have enabled sensing for gait assessment; these technologies are then used in interventions to provide feedback that facilitates changes in gait or enhances sensory capabilities. This talk will focus on vibration as the primary method of providing feedback. I will discuss the use of vibrotactile arrays to communicate plantar foot pressure to users of lower-limb prosthetics, as a synthetic form of sensory feedback. Wearable vibrating units can also serve as a cue to retrain gait, and I will describe my preliminary work on gait retraining as a conservative treatment for knee osteoarthritis. This talk will cover the development and evaluation of these haptic devices and establish their impact within the greater context of clinical biomechanics.
During manipulation, humans adjust the amount of force applied to an object depending on friction: they exert a stronger grip on slippery surfaces and a looser grip on sticky surfaces. However, the neural mechanisms signaling friction remain unclear. To fill this gap, we recorded the responses of human tactile afferents during the onset of slip against flat surfaces with different frictional properties. We observed that some afferents responded to partial slip events occurring during the transition from a stuck to a slipping contact, potentially signaling the impending slip.
Clearly explaining the rationale for a classification decision to an end-user can be as important as the decision itself. Existing approaches for deep visual recognition are generally opaque and do not output any justification text; contemporary vision-language models can describe image content but fail to take into account the class-discriminative image properties that justify visual predictions. In this talk, I will present my past and current work on Zero-Shot Learning, Vision and Language for Generative Modeling, and Explainable Machine Learning, where we show (1) how to generalize image classification models to cases where no labelled visual training data is available, (2) how to generate images and image features from detailed visual descriptions, and (3) how our models focus on the discriminating properties of the visible object, jointly predict a class label, and explain why the predicted label is, or is not, appropriate for the image.
Feedback-based automatic control has been a key enabling technology for many technological advances over the past 80 years. New application domains, such as autonomous cars driving on automated highways, energy distribution via smart grids, life in smart cities, and the new production paradigm Industry 4.0, do, however, require a new type of cybernetic systems and control theory that goes beyond some of the classical ideas. Starting from the concept of feedback and its significance in nature and technology, we will present in this talk some new developments and challenges in connection to the control of today's and tomorrow's intelligent systems.
I will survey our work on tracking and measurement in sports video, waypoints on the path to activity recognition and understanding. I will highlight some of our recent work on rectification and player tracking, not only in hockey but more recently in basketball, where we have addressed player identification in both fully supervised and semi-supervised settings.
Methods for visual recognition have made dramatic strides in recent years on various online benchmarks, but performance in the real world still often falters. Classic gradient-histogram models make overly simplistic assumptions regarding image appearance statistics, both locally and globally. Recent progress suggests that new learning-based representations can improve recognition by devices that are embedded in the physical world.
I'll review new methods for domain adaptation which capture the visual domain shift between environments, and improve recognition of objects in specific places when trained from generic online sources. I'll discuss methods for cross-modal semi-supervised learning, which can leverage additional unlabeled modalities in a test environment.
Finally, as time permits, I'll present recent results on learning hierarchical local image representations based on recursive probabilistic topic models, on learning strong object color models from sets of uncalibrated views using a new multi-view color constancy paradigm, and/or on monocular estimation of grasp affordances.
In the first part of the talk, I will describe methods that learn a single family of detectors for object classes that exhibit large within-class variation. One common solution is to use a divide-and-conquer strategy, where the space of possible within-class variations is partitioned, and different detectors are trained for different partitions.
However, these discrete partitions tend to be arbitrary in continuous spaces, and the classifiers have limited power when there are too few training samples in each subclass. To address this shortcoming, explicit feature sharing has been proposed, but it also makes training more expensive. We show that foreground-background classification (detection) and within-class classification of the foreground class (pose estimation) can be jointly solved in a multiplicative form of two kernel functions. One kernel measures similarity for foreground-background classification. The other kernel accounts for the latent factors that control within-class variation and implicitly enables feature sharing among foreground training samples; the solution for the optimal sharing emerges as a byproduct of SVM learning.
The resulting detector family is tuned to specific variations in the foreground. The effectiveness of this framework is demonstrated in experiments that involve detection, tracking, and pose estimation of human hands, faces, and vehicles in video.
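As a toy illustration of the multiplicative kernel idea (not the authors' implementation), the sketch below trains an SVM with a precomputed product kernel: one RBF kernel on an "appearance" feature x and another on a within-class "pose" factor theta, multiplied elementwise. The data, labels, and kernel parameters are all synthetic assumptions chosen for illustration.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def rbf(A, B, gamma):
    # Pairwise RBF kernel between rows of A and rows of B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Synthetic data: each sample has an appearance feature x (2-D)
# and a within-class "pose" factor theta (1-D).
n = 200
X = rng.normal(size=(n, 2))
theta = rng.uniform(-1, 1, size=(n, 1))
# The label depends jointly on appearance and pose.
y = (X[:, 0] * np.cos(theta[:, 0]) + X[:, 1] * np.sin(theta[:, 0]) > 0).astype(int)

# Multiplicative kernel: elementwise product of the two Gram matrices.
K = rbf(X, X, gamma=0.5) * rbf(theta, theta, gamma=2.0)

clf = SVC(kernel="precomputed").fit(K, y)
train_acc = clf.score(K, y)
print(f"training accuracy: {train_acc:.2f}")
```

Because the product of two positive-definite kernels is itself positive definite, the combined Gram matrix can be handed directly to a standard SVM solver via the precomputed-kernel interface.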
Beginning with a seminal paper of Diaconis (1988), the aim of so-called "probabilistic numerics" is to compute probabilistic solutions to deterministic problems arising in numerical analysis by casting them as statistical inference problems. For example, numerical integration of a deterministic function can be seen as the integration of an unknown/random function, with evaluations of the integrand at the integration nodes providing partial information about the integrand. Advantages offered by this viewpoint include: access to the Bayesian representation of prior and posterior uncertainties; better propagation of uncertainty through hierarchical systems than simple worst-case error bounds; and appropriate accounting for numerical truncation and round-off error in inverse problems, so that the replicability of deterministic simulations is not confused with their accuracy, which would otherwise yield an inappropriately concentrated Bayesian posterior. This talk will describe recent work on probabilistic numerical solvers for ordinary and partial differential equations, including their theoretical construction, convergence rates, and applications to forward and inverse problems. Joint work with Andrew Stuart (Warwick).
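As a minimal sketch of the integration-as-inference viewpoint (not the speaker's solvers), the example below treats the integral of f over [0, 1] as Gaussian-process inference: a zero-mean GP prior with an RBF kernel, conditioned on a few evaluations of f, yields a closed-form posterior mean for the integral. The kernel choice, lengthscale, and node placement are illustrative assumptions.

```python
import numpy as np
from scipy.special import erf

def bayes_quadrature(f, nodes, lengthscale=0.3, jitter=1e-8):
    """Posterior mean of the integral of f over [0, 1] under a zero-mean
    GP prior with an RBF kernel, given evaluations of f at `nodes`."""
    x = np.asarray(nodes, dtype=float)
    # RBF kernel matrix between nodes: k(a, b) = exp(-(a-b)^2 / (2 l^2)).
    K = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * lengthscale**2))
    K += jitter * np.eye(len(x))  # small jitter for numerical stability
    # Kernel mean embedding z_i = integral over [0,1] of k(s, x_i) ds,
    # available in closed form via the error function.
    s2 = np.sqrt(2.0)
    z = lengthscale * np.sqrt(np.pi / 2) * (
        erf((1 - x) / (s2 * lengthscale)) - erf((0 - x) / (s2 * lengthscale))
    )
    # Posterior mean of the integral: z^T K^{-1} f(x).
    return z @ np.linalg.solve(K, f(x))

estimate = bayes_quadrature(np.sin, np.linspace(0, 1, 10))
print(estimate)  # close to 1 - cos(1)
```

A posterior variance is available in the same closed form, which is what distinguishes this viewpoint from a classical quadrature rule: the output is a distribution over the integral's value, not a point estimate alone.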
Organizers: Philipp Hennig