

2006


Learning operational space control

Peters, J., Schaal, S.

In Robotics: Science and Systems II (RSS 2006), pages: 255-262, (Editors: Gaurav S. Sukhatme and Stefan Schaal and Wolfram Burgard and Dieter Fox), Cambridge, MA: MIT Press, RSS, 2006, clmc (inproceedings)

Abstract
While operational space control is of essential importance for robotics and well-understood from an analytical point of view, it can be prohibitively hard to achieve accurate control in the face of modeling errors, which are inevitable in complex robots, e.g., humanoid robots. In such cases, learning control methods can offer an interesting alternative to analytical control algorithms. However, the resulting learning problem is ill-defined as it requires learning an inverse mapping of a usually redundant system, which is well known to suffer from the non-convexity of the solution space, i.e., the learning system could generate motor commands that try to steer the robot into physically impossible configurations. A first important insight of this paper is that, nevertheless, a physically correct solution to the inverse problem does exist when learning of the inverse map is performed in a suitable piecewise linear way. The second crucial component of our work is based on a recent insight that many operational space controllers can be understood in terms of a constrained optimal control problem. The cost function associated with this optimal control problem allows us to formulate a learning algorithm that automatically synthesizes a globally consistent desired resolution of redundancy while learning the operational space controller. From the viewpoint of machine learning, the learning problem corresponds to a reinforcement learning problem that maximizes an immediate reward and that employs an expectation-maximization policy search algorithm. Evaluations on a three-degrees-of-freedom robot arm illustrate the feasibility of our suggested approach.
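
For the reinforcement learning view sketched in the abstract (maximizing an immediate reward with an expectation-maximization policy search), the central computation is a reward-weighted regression step. The following is a minimal, illustrative Python sketch of that idea for a policy that is linear in some features; the function and variable names are ours, not the paper's, and the actual controller learning uses localized models rather than a single global fit.

import numpy as np

def reward_weighted_regression(Phi, U, r, beta=1.0, reg=1e-6):
    """One EM-style update: refit a linear policy u = W @ phi(x) by weighted
    least squares, weighting each sample by an exponential transformation of
    its immediate reward (illustrative sketch, not the paper's code)."""
    w = np.exp(beta * (r - r.max()))          # positive weights, numerically safe
    D = np.diag(w)
    # Weighted least squares: W = (Phi^T D Phi + reg I)^{-1} Phi^T D U
    A = Phi.T @ D @ Phi + reg * np.eye(Phi.shape[1])
    B = Phi.T @ D @ U
    return np.linalg.solve(A, B).T            # shape: (dim_u, dim_phi)

# Toy usage with random data (100 samples, 5 features, 2 motor commands)
rng = np.random.default_rng(0)
Phi = rng.normal(size=(100, 5))
U = rng.normal(size=(100, 2))
r = rng.uniform(size=100)
W = reward_weighted_regression(Phi, U, r)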

am ei

link (url) [BibTex]

Reinforcement Learning for Parameterized Motor Primitives

Peters, J., Schaal, S.

In Proceedings of the 2006 International Joint Conference on Neural Networks, pages: 73-80, IJCNN, 2006, clmc (inproceedings)

Abstract
One of the major challenges both in action generation for robotics and in the understanding of human motor control is to learn the "building blocks of movement generation", called motor primitives. Motor primitives, as used in this paper, are parameterized control policies such as splines or nonlinear differential equations with desired attractor properties. While a lot of progress has been made in teaching parameterized motor primitives using supervised or imitation learning, self-improvement through interaction of the system with the environment remains a challenging problem. In this paper, we evaluate different reinforcement learning approaches for improving the performance of parameterized motor primitives. In pursuing this goal, we highlight the difficulties with current reinforcement learning methods, and outline both established and novel algorithms for the gradient-based improvement of parameterized policies. We compare these algorithms in the context of motor primitive learning, and show that our most modern algorithm, the Episodic Natural Actor-Critic, outperforms previous algorithms by at least an order of magnitude. We demonstrate the efficiency of this reinforcement learning method in the application of learning to hit a baseball with an anthropomorphic robot arm.
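
The abstract contrasts gradient-based policy search methods for parameterized motor primitives. As a point of reference, a plain episodic likelihood-ratio ("vanilla") policy gradient with Gaussian parameter exploration can be sketched as below; the Episodic Natural Actor-Critic studied in the paper additionally preconditions such a gradient with the inverse Fisher information matrix, which is what yields the reported speed-up. Names and constants here are illustrative only.

import numpy as np

def episodic_policy_gradient(theta, rollout_return, n_rollouts=20,
                             sigma=0.1, alpha=0.05, rng=None):
    """One vanilla likelihood-ratio gradient step on motor-primitive
    parameters: perturb theta with Gaussian exploration, collect episodic
    returns, and follow the REINFORCE estimate with a mean baseline."""
    rng = rng or np.random.default_rng()
    eps = rng.normal(scale=sigma, size=(n_rollouts, theta.size))
    returns = np.array([rollout_return(theta + e) for e in eps])
    baseline = returns.mean()
    # grad of log N(theta + eps | theta, sigma^2 I) w.r.t. theta is eps / sigma^2
    grad = ((returns - baseline)[:, None] * eps).mean(axis=0) / sigma**2
    return theta + alpha * grad

# Toy usage: maximize the return R(theta) = -||theta - target||^2
target = np.array([0.5, -0.3, 1.0])
theta = np.zeros(3)
for _ in range(200):
    theta = episodic_policy_gradient(theta, lambda th: -np.sum((th - target)**2))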

am ei

link (url) DOI [BibTex]

An ultrasonic standing-wave-actuated nano-positioning walking robot: piezoelectric-metal composite beam modeling

Son, K. J., Kartik, V., Wickert, J. A., Sitti, M.

Journal of vibration and control, 12(12):1293-1309, Sage Publications, 2006 (article)

pi

[BibTex]

IEEE Transactions on Robotics

Volz, R. A., Tarn, T. J., Maciejewski, A. A., Lee, S., Bicchi, A., De Luca, A., Luh, P. B., Taylor, R. H., Bekey, G. A., Arai, H., and others

2006 (article)

pi

[BibTex]

Design methodology for biomimetic propulsion of miniature swimming robots

Behkam, B., Sitti, M.

Transactions of the ASME, Journal of Dynamic Systems, Measurement, and Control, 128(1):36, ASME, 2006 (article)

pi

Project Page [BibTex]

Augmented reality user interface for an atomic force microscope-based nanorobotic system

Vogl, W., Ma, B. K., Sitti, M.

IEEE transactions on nanotechnology, 5(4):397-406, IEEE, 2006 (article)

pi

[BibTex]

Friction enhancement via micro-patterned wet elastomer adhesives on small intestinal surfaces

Kwon, J., Cheung, E., Park, S., Sitti, M.

Biomedical Materials, 1(4):216, IOP Publishing, 2006 (article)

pi

[BibTex]

Statistical Learning of LQG controllers

Theodorou, E.

Technical Report-2006-1, Computational Action and Vision Lab, University of Minnesota, 2006, clmc (techreport)

am

PDF [BibTex]

Miniature endoscopic capsule robot using biomimetic micro-patterned adhesives

Karagozler, M. E., Cheung, E., Kwon, J., Sitti, M.

In Biomedical Robotics and Biomechatronics, 2006. BioRob 2006. The First IEEE/RAS-EMBS International Conference on, pages: 105-111, 2006 (inproceedings)

pi

[BibTex]

Compliant and low-cost humidity nanosensors using nanoporous polymer membranes

Yang, B., Aksak, B., Lin, Q., Sitti, M.

Sensors and Actuators B: Chemical, 114(1):254-262, Elsevier, 2006 (article)

pi

[BibTex]

Task-based and stable telenanomanipulation in a nanoscale virtual environment

Kim, S., Sitti, M.

IEEE Transactions on automation science and engineering, 3(3):240-247, IEEE, 2006 (article)

pi

[BibTex]

Drawing suspended polymer micro-/nanofibers using glass micropipettes

Nain, A. S., Wong, J. C., Amon, C., Sitti, M.

Applied Physics Letters, 89(18):183105, AIP, 2006 (article)

pi

[BibTex]

Approximate nearest neighbor regression in very high dimensions

Vijayakumar, S., D'Souza, A., Schaal, S.

In Nearest-Neighbor Methods in Learning and Vision, pages: 103-142, (Editors: Shakhnarovich, G.;Darrell, T.;Indyk, P.), Cambridge, MA: MIT Press, 2006, clmc (inbook)

am

link (url) [BibTex]

Toward micro wall-climbing robots using biomimetic fibrillar adhesives

Greuter, M., Shah, G., Caprari, G., Tâche, F., Siegwart, R., Sitti, M.

In Proceedings of the 3rd International Symposium on Autonomous Minirobots for Research and Edutainment (AMiRE 2005), pages: 39-46, 2006 (inproceedings)

pi

[BibTex]

Geckobot: A gecko inspired climbing robot using elastomer adhesives

Unver, O., Uneri, A., Aydemir, A., Sitti, M.

In Robotics and Automation, 2006. ICRA 2006. Proceedings 2006 IEEE International Conference on, pages: 2329-2335, 2006 (inproceedings)

pi

[BibTex]

Towards hybrid swimming microrobots: bacteria assisted propulsion of polystyrene beads

Behkam, B., Sitti, M.

In Engineering in Medicine and Biology Society, 2006. EMBS’06. 28th Annual International Conference of the IEEE, pages: 2421-2424, 2006 (inproceedings)

pi

Project Page [BibTex]

Biologically inspired polymer microfibers with spatulate tips as repeatable fibrillar adhesives

Kim, S., Sitti, M.

Applied Physics Letters, 89(26):261911-261911, AIP, 2006 (article)

pi

Project Page [BibTex]


Soft microcontact printing with force control using microrobotic assembly based templates

Tafazzoli, A., Sitti, M.

In Advanced Motion Control, 2006. 9th IEEE International Workshop on, pages: 500-505, 2006 (inproceedings)

pi

[BibTex]

Modeling of the supporting legs for designing biomimetic water strider robots

Song, Y. S., Suhr, S. H., Sitti, M.

In Robotics and Automation, 2006. ICRA 2006. Proceedings 2006 IEEE International Conference on, pages: 2303-2310, 2006 (inproceedings)

pi

[BibTex]

Two-dimensional vision-based autonomous microparticle manipulation using a nanoprobe

Pawashe, C., Sitti, M.

Journal of Micromechatronics, 3(3):285-306, Brill, 2006 (article)

pi

[BibTex]

A novel water running robot inspired by basilisk lizards

Floyd, S., Keegan, T., Palmisano, J., Sitti, M.

In Intelligent Robots and Systems, 2006 IEEE/RSJ International Conference on, pages: 5430-5436, 2006 (inproceedings)

pi

[BibTex]

A biomimetic climbing robot based on the gecko

Menon, C., Sitti, M.

Journal of Bionic Engineering, 3(3):115-125, 2006 (article)

pi

[BibTex]

Force-controlled microcontact printing using microassembled particle templates

Tafazzoli, A., Pawashe, C., Sitti, M.

In Robotics and Automation, 2006. ICRA 2006. Proceedings 2006 IEEE International Conference on, pages: 263-268, 2006 (inproceedings)

pi

[BibTex]

Proximal probes based nanorobotic drawing of polymer micro/nanofibers

Nain, A. S., Amon, C., Sitti, M.

IEEE transactions on nanotechnology, 5(5):499-510, IEEE, 2006 (article)

pi

[BibTex]

Waalbot: An agile small-scale wall climbing robot utilizing pressure sensitive adhesives

Murphy, M. P., Tso, W., Tanzini, M., Sitti, M.

In Intelligent Robots and Systems, 2006 IEEE/RSJ International Conference on, pages: 3411-3416, 2006 (inproceedings)

pi

[BibTex]


1997


Locally weighted learning

Atkeson, C. G., Moore, A. W., Schaal, S.

Artificial Intelligence Review, 11(1-5):11-73, 1997, clmc (article)

Abstract
This paper surveys locally weighted learning, a form of lazy learning and memory-based learning, and focuses on locally weighted linear regression. The survey discusses distance functions, smoothing parameters, weighting functions, local model structures, regularization of the estimates and bias, assessing predictions, handling noisy data and outliers, improving the quality of predictions by tuning fit parameters, interference between old and new data, implementing locally weighted learning efficiently, and applications of locally weighted learning. A companion paper surveys how locally weighted learning can be used in robot learning and control. Keywords: locally weighted regression, LOESS, LWR, lazy learning, memory-based learning, least commitment learning, distance functions, smoothing parameters, weighting functions, global tuning, local tuning, interference.
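
The core computation the survey revolves around, locally weighted linear regression with a distance-based weighting function, can be sketched in a few lines of Python; the Gaussian kernel, bandwidth, and ridge term below are illustrative choices, not the survey's specific recommendations.

import numpy as np

def lwr_predict(X, y, x_query, bandwidth=1.0, reg=1e-8):
    """Locally weighted linear regression: weight each training sample by a
    Gaussian kernel of its distance to the query, fit a local linear model
    by weighted least squares, and evaluate it at the query point."""
    d = X - x_query
    w = np.exp(-0.5 * np.sum(d**2, axis=1) / bandwidth**2)   # Gaussian weights
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])            # add a bias column
    WX = Xb * w[:, None]
    beta = np.linalg.solve(Xb.T @ WX + reg * np.eye(Xb.shape[1]), WX.T @ y)
    return np.append(x_query, 1.0) @ beta

# Toy usage: noisy sine data
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)
print(lwr_predict(X, y, np.array([1.0]), bandwidth=0.5))     # roughly sin(1.0)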

am

link (url) [BibTex]

Locally weighted learning for control

Atkeson, C. G., Moore, A. W., Schaal, S.

Artificial Intelligence Review, 11(1-5):75-113, 1997, clmc (article)

Abstract
Lazy learning methods provide useful representations and training algorithms for learning about complex phenomena during autonomous adaptive control of complex systems. This paper surveys ways in which locally weighted learning, a type of lazy learning, has been applied by us to control tasks. We explain various forms that control tasks can take, and how this affects the choice of learning paradigm. The discussion section explores the interesting impact that explicitly remembering all previous experiences has on the problem of learning to control. Keywords: locally weighted regression, LOESS, LWR, lazy learning, memory-based learning, least commitment learning, forward models, inverse models, linear quadratic regulation (LQR), shifting setpoint algorithm, dynamic programming.
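
One of the control settings the survey discusses is linear quadratic regulation. As a minimal reference point (not the survey's implementation), a finite-horizon discrete-time LQR solver via the backward Riccati recursion can be written as follows; the toy double-integrator system in the usage is an assumption for illustration.

import numpy as np

def lqr_gains(A, B, Q, R, horizon):
    """Finite-horizon discrete-time LQR: backward Riccati recursion returning
    the time-varying feedback gains K_t for the control law u_t = -K_t x_t."""
    P = Q.copy()
    gains = []
    for _ in range(horizon):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    return gains[::-1]   # ordered from t = 0 to horizon - 1

# Toy usage: double integrator with time step dt
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
K = lqr_gains(A, B, Q=np.eye(2), R=np.array([[0.1]]), horizon=50)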

am

link (url) [BibTex]

Learning from demonstration

Schaal, S.

In Advances in Neural Information Processing Systems 9, pages: 1040-1046, (Editors: Mozer, M. C.;Jordan, M.;Petsche, T.), MIT Press, Cambridge, MA, 1997, clmc (inproceedings)

Abstract
By now it is widely accepted that learning a task from scratch, i.e., without any prior knowledge, is a daunting undertaking. Humans, however, rarely attempt to learn from scratch. They extract initial biases as well as strategies for how to approach a learning problem from instructions and/or demonstrations of other humans. For learning control, this paper investigates how learning from demonstration can be applied in the context of reinforcement learning. We consider priming the Q-function, the value function, the policy, and the model of the task dynamics as possible areas where demonstrations can speed up learning. In general nonlinear learning problems, only model-based reinforcement learning shows significant speed-up after a demonstration, while in the special case of linear quadratic regulator (LQR) problems, all methods profit from the demonstration. In an implementation of pole balancing on a complex anthropomorphic robot arm, we demonstrate that, when facing the complexities of real signal processing, model-based reinforcement learning offers the most robustness for LQR problems. Using the suggested methods, the robot learns pole balancing in just a single trial after a 30-second-long demonstration by the human instructor.
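
Of the priming strategies listed, priming the model of the task dynamics is the one the abstract reports as giving a significant speed-up in the nonlinear case. A minimal sketch of that step for the special linear (LQR) case, fitting x_{t+1} ≈ A x_t + B u_t by regularized least squares from demonstration data, is given below; the function name and toy system are illustrative assumptions, not the paper's setup.

import numpy as np

def fit_linear_dynamics(X, U, X_next, reg=1e-6):
    """Prime a task-dynamics model from demonstration data: least-squares fit
    of x_{t+1} = A x_t + B u_t from observed (state, action, next-state) triples."""
    Z = np.hstack([X, U])                                   # regressors [x_t, u_t]
    W = np.linalg.solve(Z.T @ Z + reg * np.eye(Z.shape[1]), Z.T @ X_next)
    n = X.shape[1]
    return W[:n].T, W[n:].T                                 # estimated A, B

# Toy usage: recover a noisy double integrator from one demonstrated trajectory
rng = np.random.default_rng(0)
dt, T = 0.1, 300
A_true = np.array([[1.0, dt], [0.0, 1.0]]); B_true = np.array([[0.0], [dt]])
X = np.zeros((T, 2)); U = rng.normal(size=(T, 1)); Xn = np.zeros((T, 2))
x = np.zeros(2)
for t in range(T):
    X[t] = x
    x = A_true @ x + B_true @ U[t] + 0.01 * rng.normal(size=2)
    Xn[t] = x
A_hat, B_hat = fit_linear_dynamics(X, U, Xn)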

am

link (url) [BibTex]

Robot learning from demonstration

Atkeson, C. G., Schaal, S.

In Machine Learning: Proceedings of the Fourteenth International Conference (ICML ’97), pages: 12-20, (Editors: Fisher Jr., D. H.), Morgan Kaufmann, Nashville, TN, July 8-12, 1997, 1997, clmc (inproceedings)

Abstract
The goal of robot learning from demonstration is to have a robot learn from watching a demonstration of the task to be performed. In our approach to learning from demonstration the robot learns a reward function from the demonstration and a task model from repeated attempts to perform the task. A policy is computed based on the learned reward function and task model. Lessons learned from an implementation on an anthropomorphic robot arm using a pendulum swing up task include 1) simply mimicking demonstrated motions is not adequate to perform this task, 2) a task planner can use a learned model and reward function to compute an appropriate policy, 3) this model-based planning process supports rapid learning, 4) both parametric and nonparametric models can be learned and used, and 5) incorporating a task level direct learning component, which is non-model-based, in addition to the model-based planner, is useful in compensating for structural modeling errors and slow model learning. 

am

link (url) [BibTex]

Local dimensionality reduction for locally weighted learning

Vijayakumar, S., Schaal, S.

In International Conference on Computational Intelligence in Robotics and Automation, pages: 220-225, Monterey, CA, July 10-11, 1997, 1997, clmc (inproceedings)

Abstract
Incremental learning of sensorimotor transformations in high dimensional spaces is one of the basic prerequisites for the success of autonomous robot devices as well as biological movement systems. So far, due to sparsity of data in high dimensional spaces, learning in such settings requires a significant amount of prior knowledge about the learning task, usually provided by a human expert. In this paper we suggest a partial revision of this view. Based on empirical studies, it can be observed that, despite being globally high dimensional and sparse, data distributions from physical movement systems are locally low dimensional and dense. Under this assumption, we derive a learning algorithm, Locally Adaptive Subspace Regression, that exploits this property by combining a local dimensionality reduction as a preprocessing step with a nonparametric learning technique, locally weighted regression. The usefulness of the algorithm and the validity of its assumptions are illustrated for a synthetic data set and for data of the inverse dynamics of an actual 7-degree-of-freedom anthropomorphic robot arm.
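
The combination described above, local dimensionality reduction as a preprocessing step followed by locally weighted regression, can be illustrated with a batch sketch: weight the data around a query point, project onto the top local principal components, and regress in the reduced space. The paper's Locally Adaptive Subspace Regression is incremental and chooses the local dimensionality adaptively; the fixed k, Gaussian kernel, and names below are illustrative assumptions.

import numpy as np

def local_subspace_regression(X, y, x_query, bandwidth=1.0, k=2, reg=1e-8):
    """Sketch: weight samples by distance to the query, project the locally
    low-dimensional inputs onto the top-k local principal components, and
    fit a weighted linear model in that reduced space."""
    d = X - x_query
    w = np.exp(-0.5 * np.sum(d**2, axis=1) / bandwidth**2)
    mu = (w[:, None] * X).sum(0) / w.sum()                 # weighted local mean
    Xc = X - mu
    C = (Xc * w[:, None]).T @ Xc / w.sum()                 # weighted covariance
    _, vecs = np.linalg.eigh(C)
    V = vecs[:, -k:]                                       # top-k local directions
    Z = Xc @ V
    Zb = np.hstack([Z, np.ones((Z.shape[0], 1))])
    WZ = Zb * w[:, None]
    beta = np.linalg.solve(Zb.T @ WZ + reg * np.eye(k + 1), WZ.T @ y)
    return np.append((x_query - mu) @ V, 1.0) @ beta

# Toy usage: 10-D inputs whose variation lies mostly in 2 directions
rng = np.random.default_rng(0)
Z_lat = rng.normal(size=(300, 2))
X = Z_lat @ rng.normal(size=(2, 10)) + 0.01 * rng.normal(size=(300, 10))
y = np.sin(Z_lat[:, 0]) + 0.1 * rng.normal(size=300)
print(local_subspace_regression(X, y, X[0], bandwidth=2.0, k=2))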

am

link (url) [BibTex]

Learning tasks from a single demonstration

Atkeson, C. G., Schaal, S.

In IEEE International Conference on Robotics and Automation (ICRA97), 2, pages: 1706-1712, Piscataway, NJ: IEEE, Albuquerque, NM, 20-25 April, 1997, clmc (inproceedings)

Abstract
Learning a complex dynamic robot manoeuvre from a single human demonstration is difficult. This paper explores an approach to learning from demonstration based on learning an optimization criterion from the demonstration and a task model from repeated attempts to perform the task, and using the learned criterion and model to compute an appropriate robot movement. A preliminary version of the approach has been implemented on an anthropomorphic robot arm using a pendulum swing-up task as an example.

am

link (url) [BibTex]


1992


Ins CAD integrierte Kostenkalkulation (CAD-Integrated Cost Calculation)

Ehrlenspiel, K., Schaal, S.

Konstruktion, 44(12):407-414, 1992, clmc (article)

am

[BibTex]

Integrierte Wissensverarbeitung mit CAD am Beispiel der konstruktionsbegleitenden Kalkulation (Ways to smarter CAD Systems)

Schaal, S.

Hanser, 1992 (Konstruktionstechnik München, Band 8); also doctoral dissertation, TU München, 1992, clmc (book)

am

[BibTex]

Informationssysteme mit CAD (Information systems within CAD)

Schaal, S.

In CAD/CAM Grundlagen, pages: 199-204, (Editors: Milberg, J.), Springer, Buchreihe CIM-TT. Berlin, 1992, clmc (inbook)

am

[BibTex]

What should be learned?

Schaal, S., Atkeson, C. G., Botros, S.

In Proceedings of the Seventh Yale Workshop on Adaptive and Learning Systems, pages: 199-204, New Haven, CT, May 20-22, 1992, clmc (inproceedings)

am

[BibTex]
