2000


Probabilistic detection and tracking of motion boundaries

Black, M. J., Fleet, D. J.

Int. J. of Computer Vision, 38(3):231-245, July 2000 (article)

Abstract
We propose a Bayesian framework for representing and recognizing local image motion in terms of two basic models: translational motion and motion boundaries. Motion boundaries are represented using a non-linear generative model that explicitly encodes the orientation of the boundary, the velocities on either side, the motion of the occluding edge over time, and the appearance/disappearance of pixels at the boundary. We represent the posterior probability distribution over the model parameters given the image data using discrete samples. This distribution is propagated over time using a particle filtering algorithm. To efficiently represent such a high-dimensional space we initialize samples using the responses of a low-level motion discontinuity detector. The formulation and computational model provide a general probabilistic framework for motion estimation with multiple, non-linear, models.

A brachiating robot controller

Nakanishi, J., Fukuda, T., Koditschek, D. E.

IEEE Transactions on Robotics and Automation, 16(2):109-123, 2000, clmc (article)

Abstract
We report on our empirical studies of a new controller for a two-link brachiating robot. Motivated by the pendulum-like motion of an ape's brachiation, we encode this task as the output of a "target dynamical system." Numerical simulations indicate that the resulting controller solves a number of brachiation problems that we term the "ladder," "swing-up," and "rope" problems. Preliminary analysis provides some explanation for this success. The proposed controller is implemented on a physical system in our laboratory. The robot achieves behaviors including "swing locomotion" and "swing up" and is capable of continuous locomotion over several rungs of a ladder. We discuss a number of formal questions whose answers will be required to gain a full understanding of the strengths and weaknesses of this approach.

Interaction of rhythmic and discrete pattern generators in single joint movements

Sternad, D., Dean, W. J., Schaal, S.

Human Movement Science, 19(4):627-665, 2000, clmc (article)

Abstract
The study investigates a single-joint movement task that combines a translatory and cyclic component with the objective to investigate the interaction of discrete and rhythmic movement elements. Participants performed an elbow movement in the horizontal plane, oscillating at a prescribed frequency around one target and shifting to a second target upon a trigger signal, without stopping the oscillation. Analyses focused on extracting the mutual influences of the rhythmic and the discrete component of the task. Major findings are: (1) The onset of the discrete movement was confined to a limited phase window in the rhythmic cycle. (2) Its duration was influenced by the period of oscillation. (3) The rhythmic oscillation was "perturbed" by the discrete movement as indicated by phase resetting. On the basis of these results we propose a model for the coordination of discrete and rhythmic actions (K. Matsuoka, Sustained oscillations generated by mutually inhibiting neurons with adaptations, Biological Cybernetics 52 (1985) 367-376; Mechanisms of frequency and pattern control in the neural rhythm generators, Biological Cybernetics 56 (1987) 345-353). For rhythmic movements an oscillatory pattern generator is developed following models of half-center oscillations (D. Bullock, S. Grossberg, The VITE model: a neural command circuit for generating arm and articulated trajectories, in: J.A.S. Kelso, A.J. Mandel, M. F. Shlesinger (Eds.), Dynamic Patterns in Complex Systems, World Scientific, Singapore, 1988, pp. 305-326). For discrete movements a point attractor dynamics is developed close to the VITE model. For each joint degree of freedom both pattern generators co-exist but exert mutual inhibition on each other. The suggested modeling framework provides a unified account for both discrete and rhythmic movements on the basis of neuronal circuitry.
Simulation results demonstrated that the effects observed in human performance can be replicated using the two pattern generators with a mutually inhibiting coupling.

Dynamics of a bouncing ball in human performance

Sternad, D., Duarte, M., Katsumata, H., Schaal, S.

Physical Review E, 63(011902):1-8, 2000, clmc (article)

Abstract
On the basis of a modified bouncing-ball model, we investigated whether human movements utilize principles of dynamic stability in their performance of a similar movement task. Stability analyses of the model provided predictions about conditions indicative of a dynamically stable period-one regime. In a series of experiments, human subjects bounced a ball rhythmically on a racket and displayed these conditions, supporting the conclusion that they attuned to and exploited the dynamic stability properties of the task.

Design and use of linear models for image motion analysis

Fleet, D. J., Black, M. J., Yacoob, Y., Jepson, A. D.

Int. J. of Computer Vision, 36(3):171-193, 2000 (article)

Abstract
Linear parameterized models of optical flow, particularly affine models, have become widespread in image motion analysis. The linear model coefficients are straightforward to estimate, and they provide reliable estimates of the optical flow of smooth surfaces. Here we explore the use of parameterized motion models that represent much more varied and complex motions. Our goals are threefold: to construct linear bases for complex motion phenomena; to estimate the coefficients of these linear models; and to recognize or classify image motions from the estimated coefficients. We consider two broad classes of motions: i) generic “motion features” such as motion discontinuities and moving bars; and ii) non-rigid, object-specific, motions such as the motion of human mouths. For motion features we construct a basis of steerable flow fields that approximate the motion features. For object-specific motions we construct basis flow fields from example motions using principal component analysis. In both cases, the model coefficients can be estimated directly from spatiotemporal image derivatives with a robust, multi-resolution scheme. Finally, we show how these model coefficients can be used to detect and recognize specific motions such as occlusion boundaries and facial expressions.



Robustly estimating changes in image appearance

Black, M. J., Fleet, D. J., Yacoob, Y.

Computer Vision and Image Understanding, 78(1):8-31, 2000 (article)

Abstract
We propose a generalized model of image “appearance change” in which brightness variation over time is represented as a probabilistic mixture of different causes. We define four generative models of appearance change due to (1) object or camera motion; (2) illumination phenomena; (3) specular reflections; and (4) “iconic changes” which are specific to the objects being viewed. These iconic changes include complex occlusion events and changes in the material properties of the objects. We develop a robust statistical framework for recovering these appearance changes in image sequences. This approach generalizes previous work on optical flow to provide a richer description of image events and more reliable estimates of image motion in the presence of shadows and specular reflections.


1999


Parameterized modeling and recognition of activities

Yacoob, Y., Black, M. J.

Computer Vision and Image Understanding, 73(2):232-247, 1999 (article)

Abstract
In this paper we consider a class of human activities—atomic activities—which can be represented as a set of measurements over a finite temporal window (e.g., the motion of human body parts during a walking cycle) and which has a relatively small space of variations in performance. A new approach for modeling and recognition of atomic activities that employs principal component analysis and analytical global transformations is proposed. The modeling of sets of exemplar instances of activities that are similar in duration and involve similar body part motions is achieved by parameterizing their representation using principal component analysis. The recognition of variants of modeled activities is achieved by searching the space of admissible parameterized transformations that these activities can undergo. This formulation iteratively refines the recognition of the class to which the observed activity belongs and the transformation parameters that relate it to the model in its class. We provide several experiments on recognition of articulated and deformable human motions from image motion parameters.

Is imitation learning the route to humanoid robots?

Schaal, S.

Trends in Cognitive Sciences, 3(6):233-242, 1999, clmc (article)

Abstract
This review will focus on two recent developments in artificial intelligence and neural computation: learning from imitation and the development of humanoid robots. It will be postulated that the study of imitation learning offers a promising route to gain new insights into mechanisms of perceptual motor control that could ultimately lead to the creation of autonomous humanoid robots. This hope is justified because imitation learning channels research efforts towards three important issues: efficient motor learning, the connection between action and perception, and modular motor control in the form of movement primitives. In order to make these points, first, a brief review of imitation learning will be given from the view of psychology and neuroscience. In these fields, representations and functional connections between action and perception have been explored that contribute to the understanding of motor acts of other beings. The recent discovery that some areas in the primate brain are active during both movement perception and execution provided a first idea of the possible neural basis of imitation. Secondly, computational approaches to imitation learning will be described, initially from the perspective of traditional AI and robotics, and then with a focus on neural network models and statistical learning research. Parallels and differences between biological and computational approaches to imitation will be highlighted. The review will end with an overview of current projects that actually employ imitation learning for humanoid robots.

Segmentation of endpoint trajectories does not imply segmented control

Sternad, D., Schaal, S.

Experimental Brain Research, 124(1):118-136, 1999, clmc (article)

Abstract
While it is generally assumed that complex movements consist of a sequence of simpler units, the quest to define these units of action, or movement primitives, still remains an open question. In this context, two hypotheses of movement segmentation of endpoint trajectories in 3D human drawing movements are re-examined: (1) the stroke-based segmentation hypothesis based on the results that the proportionality coefficient of the 2/3 power law changes discontinuously with each new "stroke", and (2) the segmentation hypothesis inferred from the observation of piecewise planar endpoint trajectories of 3D drawing movements. In two experiments human subjects performed a set of elliptical and figure-8 patterns of different sizes and orientations using their whole arm in 3D. The kinematic characteristics of the endpoint trajectories and the seven joint angles of the arm were analyzed. While the endpoint trajectories produced similar segmentation features as reported in the literature, analyses of the joint angles show no obvious segmentation but rather continuous oscillatory patterns. By approximating the joint angle data of human subjects with sinusoidal trajectories, and by implementing this model on a 7-degree-of-freedom anthropomorphic robot arm, it is shown that such a continuous movement strategy can produce exactly the same features as observed by the above segmentation hypotheses. The origin of this apparent segmentation of endpoint trajectories is traced back to the nonlinear transformations of the forward kinematics of human arms. The presented results demonstrate that principles of discrete movement generation may not be reconciled with those of rhythmic movement as easily as has been previously suggested, while the generalization of nonlinear pattern generators to arm movements can offer an interesting alternative to approach the question of units of action.
