

Reinforcement Learning for Optimal Feedback Control

A Lyapunov-Based Approach

Rushikesh Kamalapurkar; Patrick Walters; Joel Rosenfeld; Warren Dixon

 

171,19 EUR
Delivery time: 12-13 days





Overview


Publisher : Springer International Publishing
Series : Communications and Control Engineering
Language : English
Published : 28 May 2018
Pages : 293
Binding : Hardcover
Height : 235 mm
Width : 155 mm
ISBN : 9783319783833





Author Information


Rushikesh Kamalapurkar received his M.S. and Ph.D. degrees in 2011 and 2014, respectively, from the Mechanical and Aerospace Engineering Department at the University of Florida. After working for a year as a postdoctoral research fellow with Dr. Warren E. Dixon, he was selected as the 2015-16 MAE postdoctoral teaching fellow. In 2016 he joined the School of Mechanical and Aerospace Engineering at Oklahoma State University as an Assistant Professor. His primary research interest has been intelligent, learning-based optimal control of uncertain nonlinear dynamical systems. He has published 3 book chapters, 18 peer-reviewed journal papers, and 21 peer-reviewed conference papers. His work has been recognized by the 2015 University of Florida Department of Mechanical and Aerospace Engineering Best Dissertation Award and the 2014 University of Florida Department of Mechanical and Aerospace Engineering Outstanding Graduate Research Award.

Dr. Joel Rosenfeld is a postdoctoral researcher in the Department of Electrical Engineering and Computer Science at Vanderbilt University in the VeriVital Laboratory. He received his Ph.D. in Mathematics from the University of Florida in 2013 under the direction of Dr. Michael T. Jury. His doctoral work concerned densely defined operators over reproducing kernel Hilbert spaces (RKHSs), where he established characterizations of densely defined multiplication operators for several RKHSs. Dr. Rosenfeld then spent four years as a postdoctoral researcher in the Nonlinear Controls and Robotics Laboratory under Dr. Warren E. Dixon, where he worked on problems in numerical analysis and optimal control theory. Working together with Dr. Dixon and Dr. Kamalapurkar, he developed the numerical approach represented by the state following (StaF) method, which enables the implementation of online optimal control methods that were previously intractable.

Prof. Warren Dixon received his Ph.D. in 2000 from the Department of Electrical and Computer Engineering at Clemson University. He worked as a research staff member and Eugene P. Wigner Fellow at Oak Ridge National Laboratory (ORNL) until 2004, when he joined the Mechanical and Aerospace Engineering Department at the University of Florida. His main research interest has been the development and application of Lyapunov-based control techniques for uncertain nonlinear systems. He has published 3 books, an edited collection, 13 chapters, and over 130 journal and 240 conference papers. His work has been recognized by the 2015 and 2009 American Automatic Control Council (AACC) O. Hugo Schuck (Best Paper) Award, the 2013 Fred Ellersick Award for Best Overall MILCOM Paper, a 2012-2013 University of Florida College of Engineering Doctoral Dissertation Mentoring Award, the 2011 American Society of Mechanical Engineers (ASME) Dynamic Systems and Control Division Outstanding Young Investigator Award, the 2006 IEEE Robotics and Automation Society (RAS) Early Academic Career Award, an NSF CAREER Award (2006-2011), the 2004 Department of Energy Outstanding Mentor Award, and the 2001 ORNL Early Career Award for Engineering Achievement. He is an ASME and IEEE Fellow, an IEEE Control Systems Society (CSS) Distinguished Lecturer, and served as the Director of Operations for the Executive Committee of the IEEE CSS Board of Governors (2012-2015). He was awarded the Air Force Commander's Public Service Award (2016) for his contributions to the U.S. Air Force Science Advisory Board. He is currently or formerly an associate editor for the ASME Journal of Dynamic Systems, Measurement, and Control, Automatica, IEEE Transactions on Systems, Man, and Cybernetics: Part B: Cybernetics, and the International Journal of Robust and Nonlinear Control.

Table of Contents


Chapter 1. Optimal Control
Chapter 2. Approximate Dynamic Programming
Chapter 3. Excitation-Based Online Approximate Optimal Control
Chapter 4. Model-Based Reinforcement Learning for Approximate Optimal Control
Chapter 5. Differential Graphical Games
Chapter 6. Applications
Chapter 7. Computational Considerations
References
Index

Your Bookstore


Buchhandlung LeseLust
Owner: Gernod Siering

Georgenstraße 2
99817 Eisenach

03691/733822
kontakt@leselust-eisenach.de

Monday-Friday 9 am-5 pm
Saturday 10 am-2 pm


