2008
Learning to control in operational space

Peters, J., Schaal, S.

International Journal of Robotics Research, 27, pages: 197-212, 2008, clmc (article)

Abstract
One of the most general frameworks for phrasing control problems for complex, redundant robots is operational space control. However, while this framework is of essential importance for robotics and well-understood from an analytical point of view, it can be prohibitively hard to achieve accurate control in the face of modeling errors, which are inevitable in complex robots, e.g., humanoid robots. In this paper, we suggest a learning approach for operational space control as a direct inverse model learning problem. A first important insight for this paper is that a physically correct solution to the inverse problem with redundant degrees-of-freedom does exist when learning of the inverse map is performed in a suitable piecewise linear way. The second crucial component for our work is based on the insight that many operational space controllers can be understood in terms of a constrained optimal control problem. The cost function associated with this optimal control problem allows us to formulate a learning algorithm that automatically synthesizes a globally consistent desired resolution of redundancy while learning the operational space controller. From the machine learning point of view, this learning problem corresponds to a reinforcement learning problem that maximizes an immediate reward. We employ an expectation-maximization policy search algorithm in order to solve this problem. Evaluations on a three degrees-of-freedom robot arm are used to illustrate the suggested approach. The application to a physically realistic simulator of the anthropomorphic SARCOS Master arm demonstrates feasibility for complex high degree-of-freedom robots. We also show that the proposed method works in the setting of learning resolved motion rate control on a real, physical Mitsubishi PA-10 medical robotics arm.

am ei

link (url) DOI [BibTex]

Enhanced Adhesion of PDMS Surfaces Functionalized by Poly(n-butyl acrylate) Brushes Inspired by Gecko Foot Hairs

Nese, A., Lee, H., Dong, H., Aksak, B., Cusick, B., Kowalewski, T., Matyjaszewski, K., Sitti, M.

Polymer Preprints, 49(2):107, 2008 (article)

pi

[BibTex]

Design and development of the lifting and propulsion mechanism for a biologically inspired water runner robot

Floyd, S., Sitti, M.

IEEE Transactions on Robotics, 24(3):698-709, IEEE, 2008 (article)

pi

[BibTex]

Control of Cell Behavior by Aligned Micro/Nanofibrous Biomaterial Scaffolds Fabricated by Spinneret-Based Tunable Engineered Parameters (STEP) Technique

Nain, A. S., Phillippi, J. A., Sitti, M., MacKrell, J., Campbell, P. G., Amon, C.

Small, 4(8):1153-1159, Wiley Online Library, 2008 (article)

pi

[BibTex]

Simulation and analysis of a passive pitch reversal flapping wing mechanism for an aerial robotic platform

Arabagi, V., Sitti, M.

In Intelligent Robots and Systems, 2008. IROS 2008. IEEE/RSJ International Conference on, pages: 1260-1265, 2008 (inproceedings)

pi

Project Page [BibTex]

Biologically Inspired Polymer Micro-Patterned Adhesives

Cheung, E., Sitti, M.

EDGEWOOD CHEMICAL BIOLOGICAL CENTER ABERDEEN PROVING GROUND MD, 2008 (techreport)

pi

[BibTex]

Adaptation to a sub-optimal desired trajectory

M. Mistry, E. A. G. L. T. Y. S. S. M. K.

Advances in Computational Motor Control VII, Symposium at the Society for Neuroscience Meeting, Washington DC, 2008, clmc (article)

am

PDF [BibTex]

Human movement generation based on convergent flow fields: A computational model and a behavioral experiment

Hoffmann, H., Schaal, S.

In Advances in Computational Motor Control VII, Symposium at the Society for Neuroscience Meeting, Washington DC, 2008, clmc (inproceedings)

am

link (url) [BibTex]

Rolling and spinning friction characterization of fine particles using lateral force microscopy based contact pushing

Sümer, B., Sitti, M.

Journal of Adhesion Science and Technology, 22(5-6):481-506, Taylor & Francis Group, 2008 (article)

pi

[BibTex]

Fabrication and Characterization of Biologically Inspired Mushroom-Shaped Elastomer Microfiber Arrays

Kim, S., Sitti, M.

In ASME 2008 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, pages: 839-847, 2008 (inproceedings)

pi

Project Page [BibTex]

Gecko inspired micro-fibrillar adhesives for wall climbing robots on micro/nanoscale rough surfaces

Aksak, B., Murphy, M. P., Sitti, M.

In Robotics and Automation, 2008. ICRA 2008. IEEE International Conference on, pages: 3058-3063, 2008 (inproceedings)

pi

Project Page [BibTex]

Modeling the soft backing layer thickness effect on adhesion of elastic microfiber arrays

Long, R., Hui, C., Kim, S., Sitti, M.

Journal of Applied Physics, 104(4):044301, AIP, 2008 (article)

pi

Project Page [BibTex]

Cross-talk compensation in atomic force microscopy

Onal, C. D., Sümer, B., Sitti, M.

Review of Scientific Instruments, 79(10):103706, AIP, 2008 (article)

pi

[BibTex]

Miniature Mobile Robots Down to Micron Scale

Sitti, M.

In Micro-NanoMechatronics and Human Science, 2008. MHS 2008. International Symposium on, pages: 525-525, 2008 (inproceedings)

pi

[BibTex]

Operational space control: A theoretical and empirical comparison

Nakanishi, J., Cory, R., Mistry, M., Peters, J., Schaal, S.

International Journal of Robotics Research, 27(6):737-757, 2008, clmc (article)

Abstract
Dexterous manipulation with a highly redundant movement system is one of the hallmarks of human motor skills. From numerous behavioral studies, there is strong evidence that humans employ compliant task space control, i.e., they focus control only on task variables while keeping redundant degrees-of-freedom as compliant as possible. This strategy is robust towards unknown disturbances and simultaneously safe for the operator and the environment. The theory of operational space control in robotics aims to achieve similar performance properties. However, despite various compelling theoretical lines of research, advanced operational space control is hardly found in actual robotics implementations, in particular in new kinds of robots like humanoids and service robots, which would strongly profit from compliant dexterous manipulation. To analyze the pros and cons of different approaches to operational space control, this paper focuses on a theoretical and empirical evaluation of different methods that have been suggested in the literature, but also some new variants of operational space controllers. We address formulations at the velocity, acceleration and force levels. First, we formulate all controllers in a common notational framework, including quaternion-based orientation control, and discuss some of their theoretical properties. Second, we present experimental comparisons of these approaches on a seven-degree-of-freedom anthropomorphic robot arm with several benchmark tasks. As an aside, we also introduce a novel parameter estimation algorithm for rigid body dynamics, which ensures physical consistency, as this issue was crucial for our successful robot implementations. Our extensive empirical results demonstrate that one of the simplified acceleration-based approaches can be advantageous in terms of task performance, ease of parameter tuning, and general robustness and compliance in the face of inevitable modeling errors.

am

link (url) [BibTex]

Movement reproduction and obstacle avoidance with dynamic movement primitives and potential fields

Park, D., Hoffmann, H., Pastor, P., Schaal, S.

In IEEE International Conference on Humanoid Robots, 2008., 2008, clmc (inproceedings)

am

PDF [BibTex]

The dual role of uncertainty in force field learning

Mistry, M., Theodorou, E., Hoffmann, H., Schaal, S.

In Abstracts of the Eighteenth Annual Meeting of Neural Control of Movement (NCM), Naples, Florida, April 29-May 4, 2008, clmc (inproceedings)

Abstract
Force field experiments have been a successful paradigm for studying the principles of planning, execution, and learning in human arm movements. Subjects have been shown to cope with the disturbances generated by force fields by learning internal models of the underlying dynamics to predict disturbance effects or by increasing arm impedance (via co-contraction) if a predictive approach becomes infeasible. Several studies have addressed the issue of uncertainty in force field learning. Scheidt et al. demonstrated that subjects exposed to a viscous force field of fixed structure but varying strength (randomly changing from trial to trial) learn to adapt to the mean disturbance, regardless of the statistical distribution. Takahashi et al. additionally show a decrease in strength of after-effects after learning in the randomly varying environment. Thus they suggest that the nervous system adopts a dual strategy: learning an internal model of the mean of the random environment, while simultaneously increasing arm impedance to minimize the consequence of errors. In this study, we examine what role variance plays in the learning of uncertain force fields. We use a 7 degree-of-freedom exoskeleton robot as a manipulandum (Sarcos Master Arm, Sarcos, Inc.), and apply a 3D viscous force field of fixed structure and strength randomly selected from trial to trial. Additionally, in separate blocks of trials, we alter the variance of the randomly selected strength multiplier (while keeping a constant mean). In each block, after sufficient learning has occurred, we apply catch trials with no force field and measure the strength of after-effects. As expected in the higher variance cases, results show increasingly smaller levels of after-effects as the variance is increased, thus implying subjects choose the robust strategy of increasing arm impedance to cope with higher levels of uncertainty.
Interestingly, however, subjects show an increase in after-effect strength with a small amount of variance as compared to the deterministic (zero variance) case. This result implies that a small amount of variability aids in internal model formation, presumably a consequence of the additional amount of exploration conducted in the workspace of the task.

am

[BibTex]

Dynamic movement primitives for movement generation motivated by convergent force fields in frog

Hoffmann, H., Pastor, P., Schaal, S.

In Adaptive Motion of Animals and Machines (AMAM), 2008, clmc (inproceedings)

am

PDF [BibTex]

Adhesion of biologically inspired oil-coated polymer micropillars

Cheung, E., Sitti, M.

Journal of Adhesion Science and Technology, 22(5-6):569-589, Taylor & Francis Group, 2008 (article)

pi

[BibTex]

Polymeric Micro/Nanofiber Manufacturing and Mechanical Characterization

Nain, A. S., Sitti, M., Amon, C.

In ASME 2008 International Mechanical Engineering Congress and Exposition, pages: 295-303, 2008 (inproceedings)

pi

[BibTex]

An untethered magnetically actuated micro-robot capable of motion on arbitrary surfaces

Floyd, S., Pawashe, C., Sitti, M.

In Robotics and Automation, 2008. ICRA 2008. IEEE International Conference on, pages: 419-424, 2008 (inproceedings)

pi

[BibTex]

Fabrication of bio-inspired elastomer nanofiber arrays with spatulate tips using notching effect

Kim, S., Sitti, M., Jang, J., Thomas, E. L.

In Nanotechnology, 2008. NANO’08. 8th IEEE Conference on, pages: 780-782, 2008 (inproceedings)

pi

[BibTex]

A motorized anchoring mechanism for a tethered capsule robot using fibrillar adhesives for interventions in the esophagus

Glass, P., Cheung, E., Wang, H., Appasamy, R., Sitti, M.

In Biomedical Robotics and Biomechatronics, 2008. BioRob 2008. 2nd IEEE RAS & EMBS International Conference on, pages: 758-764, 2008 (inproceedings)

pi

[BibTex]

Vision-based feedback strategy for controlled pushing of microparticles

Lynch, N. A., Onal, C. D., Schuster, E., Sitti, M.

Journal of Micro-Nano Mechatronics, 4(1-2):73-83, Springer-Verlag, 2008 (article)

pi

[BibTex]

Effect of quantity and configuration of attached bacteria on bacterial propulsion of microbeads

Behkam, B., Sitti, M.

Applied Physics Letters, 93(22):223901, AIP, 2008 (article)

pi

[BibTex]

Efficient inverse kinematics algorithms for high-dimensional movement systems

Tevatia, G., Schaal, S.

CLMC Technical Report: TR-CLMC-2008-1, 2008, clmc (techreport)

Abstract
Real-time control of the end-effector of a humanoid robot in external coordinates requires computationally efficient solutions of the inverse kinematics problem. In this context, this paper investigates methods of resolved motion rate control (RMRC) that employ optimization criteria to resolve kinematic redundancies. In particular we focus on two established techniques, the pseudo-inverse with explicit optimization and the extended Jacobian method. We prove that the extended Jacobian method includes pseudo-inverse methods as a special solution. In terms of computational complexity, however, pseudo-inverse and extended Jacobian differ significantly in favor of pseudo-inverse methods. Employing numerical estimation techniques, we introduce a computationally efficient version of the extended Jacobian with performance comparable to the original version. Our results are illustrated in simulation studies with a multiple degree-of-freedom robot, and were evaluated on an actual 30 degree-of-freedom full-body humanoid robot.
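The pseudo-inverse RMRC scheme discussed in the abstract can be sketched in a few lines. This is an illustrative NumPy sketch of the classical pseudo-inverse solution with null-space optimization, not the paper's efficient extended-Jacobian implementation; the toy Jacobian and redundancy-resolution gradient below are invented for the example.

```python
import numpy as np

def rmrc_step(J, xdot_des, grad_H):
    """One resolved motion rate control (RMRC) step.

    J        : (m, n) end-effector Jacobian at the current joint state
    xdot_des : (m,) desired end-effector velocity
    grad_H   : (n,) gradient of a redundancy-resolution criterion H(q)
    Returns the joint-velocity command qdot (n,).
    """
    J_pinv = np.linalg.pinv(J)               # Moore-Penrose pseudo-inverse
    N = np.eye(J.shape[1]) - J_pinv @ J      # null-space projector (J @ N = 0)
    # Task-space tracking plus gradient descent on H within the null space.
    return J_pinv @ xdot_des - N @ grad_H

# Toy 3-DOF redundant arm: the null-space term optimizes H without
# disturbing the task, so J @ qdot reproduces the desired task velocity.
J = np.array([[1.0, 0.5, 0.2],
              [0.0, 1.0, 0.7]])
qdot = rmrc_step(J, np.array([0.1, -0.2]), np.array([0.0, 0.0, 1.0]))
```

Because `J @ N = 0`, the redundancy-resolution term is invisible in task space, which is exactly what makes the pseudo-inverse family attractive.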

am

link (url) [BibTex]

Behavioral experiments on reinforcement learning in human motor control

Hoffmann, H., Theodorou, E., Schaal, S.

In Abstracts of the Eighteenth Annual Meeting of Neural Control of Movement (NCM), Naples, Florida, April 29-May 4, 2008, clmc (inproceedings)

Abstract
Reinforcement learning (RL) - learning solely based on reward or cost feedback - is widespread in robotics control and has also been suggested as a computational model for human motor control. In human motor control, however, hardly any experiments have studied reinforcement learning. Here, we study learning based on visual cost feedback in a reaching task and performed three experiments: (1) to establish a simple enough experiment for RL, (2) to study spatial localization of RL, and (3) to study the dependence of RL on the cost function. In experiment (1), subjects sit in front of a drawing tablet and look at a screen onto which the drawing pen's position is projected. Beginning from a start point, their task is to move with the pen through a target point presented on screen. Visual feedback about the pen's position is given only before movement onset. At the end of a movement, subjects get visual feedback only about the cost of this trial. We choose as cost the squared distance between target and virtual pen position at the target line. Above a threshold value, the cost was fixed at this value. In the mapping of the pen's position onto the screen, we added a bias (unknown to the subject) and Gaussian noise. As a result, subjects could learn the bias, and thus, showed reinforcement learning. In experiment (2), we randomly altered the target position between three different locations (three different directions from the start point: -45, 0, 45). For each direction, we chose a different bias. As a result, subjects learned all three bias values simultaneously. Thus, RL can be spatially localized. In experiment (3), we varied the sensitivity of the cost function by multiplying the squared distance with a constant value C, while keeping the same cut-off threshold. As in experiment (2), we had three target locations. We assigned to each location a different C value (this assignment was randomized between subjects).
Since subjects learned the three locations simultaneously, we could directly compare the effect of the different cost functions. As a result, we found an optimal C value; if C was too small (insensitive cost), learning was slow; if C was too large (narrow cost valley), the exploration time was longer and learning delayed. Thus, reinforcement learning in human motor control appears to be sensitive to the cost function.

am

[BibTex]

Movement generation by learning from demonstration and generalization to new targets

Pastor, P., Hoffmann, H., Schaal, S.

In Adaptive Motion of Animals and Machines (AMAM), 2008, clmc (inproceedings)

am

PDF [BibTex]

Combining dynamic movement primitives and potential fields for online obstacle avoidance

Park, D., Hoffmann, H., Schaal, S.

In Adaptive Motion of Animals and Machines (AMAM), Cleveland, Ohio, 2008, clmc (inproceedings)

am

link (url) [BibTex]

A library for locally weighted projection regression

Klanke, S., Vijayakumar, S., Schaal, S.

Journal of Machine Learning Research, 9, pages: 623-626, 2008, clmc (article)

Abstract
In this paper we introduce an improved implementation of locally weighted projection regression (LWPR), a supervised learning algorithm that is capable of handling high-dimensional input data. As the key features, our code supports multi-threading, is available for multiple platforms, and provides wrappers for several programming languages.
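As an illustration of the idea behind LWPR — blending local linear models fit by weighted regression — here is a minimal one-dimensional locally weighted regression sketch. This is not the library's API; the Gaussian kernel, centers, and width are arbitrary choices for the example, and real LWPR additionally uses projection regression to handle high-dimensional inputs incrementally.

```python
import numpy as np

def lwr_predict(x_query, X, y, centers, width=0.5):
    """Predict y at x_query by blending local linear models.

    Each local model is fit by weighted least squares with a Gaussian
    kernel around its center; predictions are blended by kernel weight.
    """
    num, den = 0.0, 0.0
    for c in centers:
        w = np.exp(-0.5 * ((X - c) / width) ** 2)     # per-sample kernel weights
        A = np.vstack([X - c, np.ones_like(X)]).T     # local linear features
        sw = np.sqrt(w)                               # weighted LS via row scaling
        beta, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
        wq = np.exp(-0.5 * ((x_query - c) / width) ** 2)
        num += wq * (beta[0] * (x_query - c) + beta[1])
        den += wq
    return num / den

# Noise-free linear data is reproduced almost exactly by the blend.
X = np.linspace(0.0, 1.0, 50)
y = 2.0 * X + 1.0
yhat = lwr_predict(0.3, X, y, centers=[0.0, 0.5, 1.0])
```

On exactly linear data every local model recovers the same line, so the blended prediction at 0.3 is 2*0.3 + 1 = 1.6 regardless of the kernel placement.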

am

link (url) [BibTex]

Fabrication of Single and Multi-Layer Fibrous Biomaterial Scaffolds for Tissue Engineering

Nain, A. S., Miller, E., Sitti, M., Campbell, P., Amon, C.

In ASME 2008 International Mechanical Engineering Congress and Exposition, pages: 231-238, 2008 (inproceedings)

pi

[BibTex]

Performance of different foot designs for a water running robot

Floyd, S., Adilak, S., Ramirez, S., Rogman, R., Sitti, M.

In Robotics and Automation, 2008. ICRA 2008. IEEE International Conference on, pages: 244-250, 2008 (inproceedings)

pi

[BibTex]

Dynamic modeling of a basilisk lizard inspired quadruped robot running on water

Park, H. S., Floyd, S., Sitti, M.

In Intelligent Robots and Systems, 2008. IROS 2008. IEEE/RSJ International Conference on, pages: 3101-3107, 2008 (inproceedings)

pi

[BibTex]

Bacterial propulsion of chemically patterned micro-cylinders

Behkam, B., Sitti, M.

In Biomedical Robotics and Biomechatronics, 2008. BioRob 2008. 2nd IEEE RAS & EMBS International Conference on, pages: 753-757, 2008 (inproceedings)

pi

[BibTex]

Preface to the Journal of Micro-Nano Mechatronics

Dario, P., Fukuda, T., Sitti, M.

Journal of Micro-Nano Mechatronics, 4(1-2):1-1, Springer-Verlag, 2008 (article)

pi

[BibTex]

A legged anchoring mechanism for capsule endoscopes using micropatterned adhesives

Glass, P., Cheung, E., Sitti, M.

IEEE Transactions on Biomedical Engineering, 55(12):2759-2767, IEEE, 2008 (article)

pi

Project Page [BibTex]

Computational model for movement learning under uncertain cost

Theodorou, E., Hoffmann, H., Mistry, M., Schaal, S.

In Abstracts of the Society of Neuroscience Meeting (SFN 2008), Washington, DC, 2008, clmc (inproceedings)

Abstract
Stochastic optimal control is a framework for computing control commands that lead to an optimal behavior under a given cost. Despite the long history of optimal control in engineering, it has only recently been applied to describe human motion. So far, stochastic optimal control has been mainly used in tasks that are already learned, such as reaching to a target. For learning, however, there are only a few cases where optimal control has been applied. The main assumptions of stochastic optimal control that restrict its application to tasks after learning are the a priori knowledge of (1) a quadratic cost function, (2) a state space model that captures the kinematics and/or dynamics of the musculoskeletal system, and (3) a measurement equation that models the proprioceptive and/or exteroceptive feedback. Under these assumptions, a sequence of control gains is computed that is optimal with respect to the prespecified cost function. In our work, we relax the assumption of the a priori known cost function and provide a computational framework for modeling tasks that involve learning. Typically, a cost function consists of two parts: one part that models the task constraints, like the squared distance to the goal at the movement endpoint, and one part that integrates over the squared control commands. In learning a task, the first part of this cost function will be adapted. We use an expectation-maximization scheme for learning: the expectation step optimizes the task constraints through gradient descent of a reward function and the maximization step optimizes the control commands. Our computational model is tested and compared with data from a behavioral experiment. In this experiment, subjects sit in front of a drawing tablet and look at a screen onto which the drawing-pen's position is projected. Beginning from a start point, their task is to move with the pen through a target point presented on screen. Visual feedback about the pen's position is given only before movement onset.
At the end of a movement, subjects get visual feedback only about the cost of this trial. In the mapping of the pen's position onto the screen, we added a bias (unknown to subject) and Gaussian noise. Therefore the cost is a function of this bias. The subjects were asked to reach to the target and minimize this cost over trials. In this behavioral experiment, subjects could learn the bias and thus showed reinforcement learning. With our computational model, we could model the learning process over trials. Particularly, the dependence on parameters of the reward function (Gaussian width) and the modulation of movement variance over time were similar in experiment and model.

am

[BibTex]

Optimization strategies in human reinforcement learning

Hoffmann, H., Theodorou, E., Schaal, S.

Advances in Computational Motor Control VII, Symposium at the Society for Neuroscience Meeting, Washington DC, 2008, clmc (article)

am

PDF [BibTex]

A Bayesian approach to empirical local linearizations for robotics

Ting, J., D’Souza, A., Vijayakumar, S., Schaal, S.

In International Conference on Robotics and Automation (ICRA2008), Pasadena, CA, USA, May 19-23, 2008, clmc (inproceedings)

Abstract
Local linearizations are ubiquitous in the control of robotic systems. Analytical methods, if available, can be used to obtain the linearization, but in complex robotic systems, where the dynamics and kinematics are often not faithfully obtainable, empirical linearization may be preferable. In this case, it is important to only use data for the local linearization that lies within a ``reasonable'' linear regime of the system, which can be defined from the Hessian at the point of the linearization -- a quantity that is not available without an analytical model. We introduce a Bayesian approach to solve statistically what constitutes a ``reasonable'' local regime. We approach this problem in the context of local linear regression. In contrast to previous locally linear methods, we avoid cross-validation or complex statistical hypothesis testing techniques to find the appropriate local regime. Instead, we treat the parameters of the local regime probabilistically and use approximate Bayesian inference for their estimation. This approach results in an analytical set of iterative update equations that are easily implemented on real robotic systems for real-time applications. As in other locally weighted regressions, our algorithm also lends itself to complete nonlinear function approximation for learning empirical internal models. We sketch the derivation of our Bayesian method and provide evaluations on synthetic data and actual robot data where the analytical linearization was known.

am

link (url) [BibTex]

Do humans plan continuous trajectories in kinematic coordinates?

Hoffmann, H., Schaal, S.

In Abstracts of the Society of Neuroscience Meeting (SFN 2008), Washington, DC, 2008, clmc (inproceedings)

Abstract
The planning and execution of human arm movements is still unresolved. An ongoing controversy is whether we plan a movement in kinematic coordinates and convert these coordinates with an inverse internal model into motor commands (like muscle activation), or whether we combine a few muscle synergies or equilibrium points to move a hand, e.g., between two targets. The first hypothesis implies that a planner produces a desired end-effector position for all time points; the second relies on the dynamics of the musculoskeletal system for a given control command to produce a continuous end-effector trajectory. To distinguish between these two possibilities, we use a visuomotor adaptation experiment. Subjects moved a pen on a graphics tablet and observed the pen's mapped position onto a screen (subjects quickly adapted to this mapping). The task was to move a cursor between two points in a given time window. In the adaptation test, we manipulated the velocity profile of the cursor feedback such that the shape of the trajectories remained unchanged (for straight paths). If humans used a kinematic plan and mapped the desired end-effector position onto control commands at each point in time, subjects should adapt to the above manipulation. In a similar experiment, Wolpert et al (1995) showed adaptation to changes in the curvature of trajectories. This result, however, cannot rule out a shift of an equilibrium point or an additional synergy activation between start and end point of a movement. In our experiment, subjects did two sessions, one control session without and one with velocity-profile manipulation. To skew the velocity profile of the cursor trajectory, we added to the current velocity, v, the function 0.8*v*cos(pi + pi*x), where x is the projection of the cursor position onto the start-goal line divided by the distance from start to goal (x=0 at the start point).
As a result, subjects did not adapt to this manipulation: for all subjects, the true hand motion was not significantly modified in a direction consistent with adaptation, even though the visually presented motion differed significantly from the control motion. One may still argue that this difference in motion was insufficient to be processed visually. Thus, as a control experiment, we replayed control and modified motions to the subjects and asked which of the two motions appeared 'more natural'. Subjects chose the unperturbed motion as more natural significantly more often than chance. In summary, for a visuomotor transformation task, the hypothesis of a planned continuous end-effector trajectory predicts adaptation to a modified velocity profile. The current experiment found no adaptation under such a transformation.
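The velocity-profile manipulation is fully specified by the formula in the abstract; restated as code (function and variable names are mine):

```python
import math

def skew_velocity(v, x):
    """Skewed cursor velocity from the experiment's manipulation:
    v' = v + 0.8 * v * cos(pi + pi * x),
    where x in [0, 1] is the normalized progress along the start-goal line
    (projection of cursor position divided by start-goal distance)."""
    return v + 0.8 * v * math.cos(math.pi + math.pi * x)

# The cursor is slowed near the start (x=0: factor 0.2), unchanged at
# midpoint (x=0.5: factor 1.0), and sped up near the goal (x=1: factor 1.8),
# which skews the velocity profile while leaving a straight path unchanged.
factors = [skew_velocity(1.0, x) for x in (0.0, 0.5, 1.0)]
```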

am

[BibTex]

Design and Numerical Modeling of an On-Board Chemical Release Module for Motion Control of Bacteria-Propelled Swimming Micro-Robots

Behkam, B., Nain, A. S., Amon, C. H., Sitti, M.

In ASME 2008 International Mechanical Engineering Congress and Exposition, pages: 239-244, 2008 (inproceedings)

pi

[BibTex]

Dynamic modeling of stick slip motion in an untethered magnetic microrobot

Pawashe, C., Floyd, S., Sitti, M.

Proceedings of Robotics: Science and Systems IV, Zurich, Switzerland, 2008 (article)

pi

[BibTex]

Investigation of Calcium Mechanotransduction by Quasi 3-D Microfiber Mechanical Stimulation of Cells

Ruder, W. C., Pratt, E. D., Sitti, M., LeDuc, P. R., Antaki, J. F.

In ASME 2008 Summer Bioengineering Conference, pages: 1049-1050, 2008 (inproceedings)

pi

[BibTex]

Beanbag robotics: Robotic swarms with 1-dof units

Kriesel, D. M., Cheung, E., Sitti, M., Lipson, H.

In International Conference on Ant Colony Optimization and Swarm Intelligence, pages: 267-274, 2008 (inproceedings)

pi

[BibTex]

Particle image velocimetry and thrust of flagellar micro propulsion systems

Danis, U., Sitti, M., Pekkan, K.

In APS Division of Fluid Dynamics Meeting Abstracts, 1, 2008 (inproceedings)

pi

[BibTex]

2007
Towards Machine Learning of Motor Skills

Peters, J., Schaal, S., Schölkopf, B.

In Proceedings of Autonome Mobile Systeme (AMS), pages: 138-144, (Editors: K Berns and T Luksch), 2007, clmc (inproceedings)

Abstract
Autonomous robots that can adapt to novel situations have been a long-standing vision of robotics, artificial intelligence, and cognitive sciences. Early approaches to this goal during the heydays of artificial intelligence research in the late 1980s, however, made it clear that an approach purely based on reasoning or human insights would not be able to model all the perceptuomotor tasks that a robot should fulfill. Instead, new hope was put in the growing wake of machine learning that promised fully adaptive control algorithms which learn both by observation and trial-and-error. However, to date, learning techniques have yet to fulfill this promise, as only few methods manage to scale into the high-dimensional domains of manipulator robotics, or even the new upcoming trend of humanoid robotics, and usually scaling was only achieved in precisely pre-structured domains. In this paper, we investigate the ingredients for a general approach to motor skill learning in order to get one step closer towards human-like performance. For doing so, we study two major components for such an approach, i.e., firstly, a theoretically well-founded general approach to representing the required control structures for task representation and execution and, secondly, appropriate learning algorithms which can be applied in this setting.

am ei

PDF DOI [BibTex]

Reinforcement Learning for Optimal Control of Arm Movements

Theodorou, E., Peters, J., Schaal, S.

In Abstracts of the 37th Meeting of the Society of Neuroscience, 2007, clmc (inproceedings)

Abstract
Everyday motor behavior consists of a plethora of challenging motor skills, from discrete movements such as reaching and throwing to rhythmic movements such as walking, drumming and running. How this plethora of motor skills can be learned remains an open question. In particular, is there any unifying computational framework that could model the learning process of this variety of motor behaviors and at the same time be biologically plausible? In this work we aim to give an answer to these questions by providing a computational framework that unifies the learning mechanism of both rhythmic and discrete movements under optimization criteria, i.e., in a non-supervised trial-and-error fashion. Our suggested framework is based on Reinforcement Learning, which is mostly considered as too costly to be a plausible mechanism for learning complex limb movement. However, recent work on reinforcement learning with policy gradients combined with parameterized movement primitives allows novel and more efficient algorithms. By using the representational power of such motor primitives we show how rhythmic motor behaviors such as walking, squashing and drumming as well as discrete behaviors like reaching and grasping can be learned with biologically plausible algorithms. Using extensive simulations and different reward functions, we provide results that support the hypothesis that Reinforcement Learning could be a viable candidate for motor learning of human motor behavior when other learning methods like supervised learning are not feasible.


[BibTex]



Reinforcement learning by reward-weighted regression for operational space control

Peters, J., Schaal, S.

In Proceedings of the 24th Annual International Conference on Machine Learning, pages: 745-750, ICML, 2007, clmc (inproceedings)

Abstract
Many robot control problems of practical importance, including operational space control, can be reformulated as immediate-reward reinforcement learning problems. However, few of the known optimization or reinforcement learning algorithms can be used for online learning control on robots, as they are either prohibitively slow, do not scale to interesting domains of complex robots, or require trying out policies generated by random search, which is infeasible on a physical system. Using a generalization of the EM-based reinforcement learning framework suggested by Dayan & Hinton, we reduce the problem of learning with immediate rewards to a reward-weighted regression problem with an adaptive, integrated reward transformation for faster convergence. The resulting algorithm is efficient, learns smoothly without dangerous jumps in solution space, and works well in applications to complex high degree-of-freedom robots.
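For a linear Gaussian policy, the reward-weighted regression idea in the abstract reduces the M-step to weighted least squares with (transformed) rewards as weights. The toy below is a sketch under that assumption; the single feature, the exponential reward transformation, and the exploration decay schedule are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def reward_weighted_regression(phi, u, r):
    """One M-step: least squares weighted by nonnegative rewards.

    phi: (N, d) features, u: (N,) executed actions, r: (N,) rewards.
    """
    W = np.diag(r)
    return np.linalg.solve(phi.T @ W @ phi, phi.T @ W @ u)

# Toy immediate-reward problem: the best action is u* = 2*x.
theta, sigma = 0.0, 1.0
for _ in range(30):
    x = rng.uniform(-1, 1, size=100)
    phi = x[:, None]                      # single feature phi(x) = x
    u = phi @ np.atleast_1d(theta) + sigma * rng.standard_normal(100)
    r = np.exp(-0.5 * (u - 2 * x) ** 2)   # transformed immediate reward
    theta = reward_weighted_regression(phi, u, r)[0]
    sigma = max(0.1, 0.9 * sigma)         # simple exploration decay

print(round(theta, 2))
```

Each iteration re-weights the executed actions by their reward and refits the policy mean, so the policy drifts toward high-reward actions without any explicit gradient step.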


link (url) DOI [BibTex]



Policy gradient methods for machine learning

Peters, J., Theodorou, E., Schaal, S.

In Proceedings of the 14th INFORMS Conference of the Applied Probability Society, pages: 97-98, Eindhoven, Netherlands, July 9-11, 2007, 2007, clmc (inproceedings)

Abstract
We present an in-depth survey of policy gradient methods as they are used in the machine learning community for optimizing parameterized, stochastic control policies in Markovian systems with respect to the expected reward. Despite having been developed separately in the reinforcement learning literature, policy gradient methods employ likelihood-ratio gradient estimators as also suggested in the stochastic simulation optimization community. It is well known that this approach to policy gradient estimation traditionally suffers from three drawbacks: large variance, a strong dependence on baseline functions, and inefficient gradient descent. In this talk, we present a series of recent results that tackle each of these problems. The variance of the gradient estimate can be reduced significantly through recently introduced techniques such as optimal baselines, compatible function approximation, and all-action gradients. However, as even analytically obtained policy gradients converge unnaturally slowly, the step from 'vanilla' policy gradient methods to natural policy gradients was required in order to overcome the inefficiency of gradient descent. This development resulted in the Natural Actor-Critic architecture, which can be shown to be very efficient when applied to motor primitive learning for robotics.
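The likelihood-ratio ('vanilla') gradient estimator and the variance-optimal baseline mentioned in the abstract can be sketched on a one-step problem with a Gaussian policy. The quadratic reward, step size, and sample counts are illustrative assumptions, not values from the talk.

```python
import numpy as np

rng = np.random.default_rng(1)

def likelihood_ratio_gradient(theta, sigma, n=500):
    """REINFORCE-style estimate of d E[r] / d theta for the policy
    N(u | theta, sigma^2) with reward r(u) = -(u - 3)^2."""
    u = theta + sigma * rng.standard_normal(n)
    r = -(u - 3.0) ** 2
    score = (u - theta) / sigma**2                  # grad of log-likelihood
    b = np.mean(score**2 * r) / np.mean(score**2)   # variance-optimal baseline
    return np.mean(score * (r - b))

theta = 0.0
for _ in range(200):
    theta += 0.05 * likelihood_ratio_gradient(theta, sigma=1.0)

print(round(theta, 1))
```

The baseline `b` does not change the estimator's expectation (the score has zero mean) but sharply reduces its variance; without it, the same step size would make the iteration far noisier. Natural gradients go further by preconditioning this estimate with the inverse Fisher information.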


[BibTex]



Policy Learning for Motor Skills

Peters, J., Schaal, S.

In Proceedings of 14th International Conference on Neural Information Processing (ICONIP), pages: 233-242, (Editors: Ishikawa, M. , K. Doya, H. Miyamoto, T. Yamakawa), 2007, clmc (inproceedings)

Abstract
Policy learning which allows autonomous robots to adapt to novel situations has been a long standing vision of robotics, artificial intelligence, and cognitive sciences. However, to date, learning techniques have yet to fulfill this promise as only few methods manage to scale into the high-dimensional domains of manipulator robotics, or even the new upcoming trend of humanoid robotics, and usually scaling was only achieved in precisely pre-structured domains. In this paper, we investigate the ingredients for a general approach policy learning with the goal of an application to motor skill refinement in order to get one step closer towards human-like performance. For doing so, we study two major components for such an approach, i.e., firstly, we study policy learning algorithms which can be applied in the general setting of motor skill learning, and, secondly, we study a theoretically well-founded general approach to representing the required control structures for task representation and execution.


PDF DOI [BibTex]
