Plenary Speakers

Sébastien Gros

NTNU Trondheim, Norway
Reinforcement Learning for MPC: From simple ideas to valuable insights
August 21, 16:30-17:30

Daniel Quevedo

Queensland University of Technology, Australia
Posterior Sampling for Channel Selection in Networked Estimation and Control
August 22, 09:20-10:20

Maryam Kamgarpour

EPF Lausanne, Switzerland
Towards incorporating safety in data-driven control and reinforcement learning
August 23, 09:20-10:20

Frank Allgöwer

University of Stuttgart, Germany
Model-based vs. data-based MPC: Which is the way to the promised land?
August 24, 17:00-18:00

Due to unavoidable circumstances, Dr. Scott Kuindersma's plenary talk has been canceled. We apologize for any inconvenience.


Special Guest Speaker

Dimitri Bertsekas

Arizona State University and MIT, USA
Model Predictive Control and Reinforcement Learning: A Unified Framework Based on Dynamic Programming
August 24, 09:20-10:20





Sébastien Gros

Date: August 21, 16:30-17:30

Title: Reinforcement Learning for MPC: From simple ideas to valuable insights

Abstract: This talk summarizes the results and insights presented in nearly 50 publications on the combination of Reinforcement Learning (RL) and Model Predictive Control (MPC). This combination was first investigated as a way to harmoniously merge the two fields into a method inheriting the strengths of both. Two viewpoints are useful here: on the one hand, RL can be viewed as a powerful toolbox for directly improving the closed-loop performance of MPC; on the other hand, MPC can be viewed as a more structured, knowledge-driven and explainable function approximation for RL than deep neural networks. These investigations have shown how to achieve this combination, for which classes of problems it is most useful, and where the difficulties lie. They also provide rich insights into the use of MPC as a way of producing optimal control policies, especially for stochastic problems and economic objectives. In particular, they motivate viewing MPC in a "holistic" way, and they prompt reflections on the role of the model in MPC and on model-fitting approaches for data-driven MPC. The talk introduces these main results and insights in a coherent and accessible manner by stripping away the complexity and focusing on their essence.
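
To make the second viewpoint concrete, a parameterized MPC scheme can itself serve as the action-value function of an RL agent. The following is a minimal sketch in generic notation (the symbols and the simple Q-learning update are illustrative, not a summary of the specific schemes covered in the talk):

    Q_\theta(s,a) = \min_{x_{0:N},\, u_{0:N-1}} \; T_\theta(x_N) + \sum_{k=0}^{N-1} \ell_\theta(x_k, u_k)
    \quad \text{s.t.} \quad x_0 = s, \;\; u_0 = a, \;\; x_{k+1} = f_\theta(x_k, u_k), \;\; h_\theta(x_k, u_k) \le 0,

so that \pi_\theta(s) = \arg\min_a Q_\theta(s,a) is the usual MPC policy, and RL adjusts the MPC parameters \theta from closed-loop data, e.g. via a Q-learning step

    \theta \leftarrow \theta + \alpha\, \delta\, \nabla_\theta Q_\theta(s,a), \qquad
    \delta = r + \gamma \min_{a'} Q_\theta(s',a') - Q_\theta(s,a).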

Biography: Sébastien Gros received his PhD in 2008 from the Automatic Control Laboratory at EPFL. After a self-supported bike trip from Switzerland to Everest base camp, he worked in the wind industry in 2010-2011. He joined the Optimal Control group at KU Leuven in 2011 as a postdoc, where he worked on numerical optimization methods and NMPC for complex mechanical applications. In 2013 he joined Chalmers University of Technology as an Assistant Professor, working on distributed optimization methods, autonomous driving, vehicle control and traffic optimization; he was promoted to Associate Professor in 2017. In 2019 he joined the Department of Engineering Cybernetics at the Norwegian University of Science and Technology (NTNU) as a full Professor, and he became head of the department in 2022. He has been working on learning methods for MPC since 2018.




Daniel Quevedo

Date: August 22, 09:20-10:20

Title: Posterior Sampling for Channel Selection in Networked Estimation and Control

Abstract: The use of machine learning techniques for control has gained increasing attention in recent years. Learning-based estimation and control holds the promise of enabling the solution of problems that are difficult or even intractable with traditional control design techniques. Despite significant progress, several issues remain, for example in relation to stability guarantees, robustness, and computational cost. This talk presents some of our recent work on networked control systems with uncertainties and illustrates how posterior sampling techniques can be used for their design. We focus on a basic architecture where sensor measurements and control signals are transmitted over lossy communication channels that introduce random packet dropouts. At any time instant, one out of several available channels can be chosen for transmission. Since channel dropout probabilities are unknown, finding the best channel requires learning from transmission outcomes. We study a scenario where learning of the channel dropout probabilities and control are carried out simultaneously. The coupling between learning dynamics and control system dynamics raises challenges in relation to stability and performance. To facilitate fast learning, we propose to select channels using Bayesian posterior sampling, also called Thompson sampling. This talk elucidates conditions that guarantee that the resulting system is stochastically stable and characterizes performance in terms of the control regret.
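
As a concrete illustration of the channel-selection mechanism described above, here is a minimal, self-contained sketch (not the authors' implementation) of Thompson sampling over lossy channels: each channel's unknown packet-success probability receives a Beta posterior, a success probability is sampled from every posterior at each transmission instant, and the channel with the largest sample is used. The number of channels, the priors, and the true dropout probabilities are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    # True packet-success probabilities of the available channels
    # (unknown to the controller; used here only to simulate outcomes).
    true_success = np.array([0.55, 0.70, 0.90])
    n_channels = len(true_success)

    # Beta(1, 1) priors on each channel's success probability:
    # alpha counts successes + 1, beta counts dropouts + 1.
    alpha = np.ones(n_channels)
    beta = np.ones(n_channels)

    for t in range(2000):
        # Thompson sampling: draw one success probability per channel from
        # its posterior and transmit over the channel with the largest draw.
        samples = rng.beta(alpha, beta)
        ch = int(np.argmax(samples))

        # Observe the outcome (1 = packet delivered, 0 = dropout) and
        # update the posterior of the channel that was used.
        delivered = rng.random() < true_success[ch]
        alpha[ch] += delivered
        beta[ch] += 1 - delivered

    print("Posterior mean success probability per channel:", alpha / (alpha + beta))

In closed loop, the same posterior-sampling rule runs jointly with the estimator or controller, which is precisely where the stability and regret questions addressed in the talk arise.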

Biography: Daniel Quevedo received the Ingeniero Civil Electrónico and MSc degrees from Universidad Técnica Federico Santa María, Valparaíso, Chile, in 2000, and the PhD degree from the University of Newcastle, Australia, in 2005. He is a Professor of Cyberphysical Systems with the School of Electrical Engineering and Robotics, Queensland University of Technology (QUT), Australia, and will join the University of Sydney in September. Prior to moving to QUT, he established and led the Chair in Automatic Control at Paderborn University, Germany.
  Daniel's research interests are in networked control systems, cyberphysical systems security, and control of power converters. He serves as Associate Editor for IEEE Transactions on Control of Networked Systems and for IEEE Control Systems, and on the Editorial Board of the International Journal of Robust and Nonlinear Control. From 2015 to 2018 he was Chair of the IEEE Control Systems Society Technical Committee on Networks & Communication Systems. In 2003 he received the IEEE Conference on Decision and Control Best Student Paper Award, and he was also a finalist in 2002. Prof. Quevedo is a co-recipient of the 2018 IEEE Transactions on Automatic Control George S. Axelby Outstanding Paper Award. He is a Fellow of the IEEE.




Maryam Kamgarpour

Date: August 23, 09:20-10:20

Title: Towards incorporating safety in data-driven control and reinforcement learning

Abstract: From intelligent transportation systems to human-robot interaction, modern applications of control involve decision-making in the presence of increasing amounts of uncertainty. To reliably apply automation in these scenarios, we need to design tractable algorithms with provable performance guarantees. This plenary will highlight our recent developments in advancing stochastic control and reinforcement learning to address these challenges. In the first part, I will discuss our data-driven, model-based approaches for enforcing the safety constraints of an autonomous system in the presence of environment uncertainties. Building on this result, in the second part I will focus on our approaches towards incorporating safety in model-free reinforcement learning, to account for more complex system dynamics. The talk will showcase examples of applying the developed theory to transportation systems and robotics.
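
For context, safety under environment uncertainty is commonly posed as a chance-constrained stochastic control problem; the formulation below is a generic textbook form in standard notation, not the specific formulation developed in the talk:

    \min_{\pi} \; \mathbb{E}\Big[\sum_{k=0}^{N-1} \ell(x_k, u_k)\Big]
    \quad \text{s.t.} \quad x_{k+1} = f(x_k, u_k, \delta_k), \;\; u_k = \pi_k(x_k), \;\;
    \Pr\big[x_k \in \mathcal{X}_{\mathrm{safe}} \text{ for } k = 0, \dots, N\big] \ge 1 - \varepsilon,

where \delta_k models the environment uncertainty and \varepsilon is the tolerated probability of violating the safe set \mathcal{X}_{\mathrm{safe}}.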

Biography: Maryam Kamgarpour is a professor in the School of Engineering of École Polytechnique Fédérale de Lausanne (EPFL). Prior to EPFL, she served as a faculty member at the University of British Columbia and at ETH Zürich. She holds a Doctor of Philosophy in Engineering from the University of California, Berkeley, and a Bachelor of Applied Science from the University of Waterloo, Canada. Her research is on stochastic control and on multiagent learning and control. Her theoretical research is motivated by control problems arising in intelligent transportation systems, robotics, and power grid systems. She has received the NASA Excellence in Publication Award (2010), a European Union Starting Grant (2016-2021), the IEEE Transactions on Control of Network Systems Outstanding Paper Award (2022), and the European Control Award (2024).




Frank Allgöwer

Date: August 24, 17:00-18:00

Title: Model-based vs. data-based MPC: Which is the way to the promised land?

Abstract: Recent years have shown rapid progress in learning-based and data-driven methods, significantly impacting the field of control, including model predictive control (MPC). In addition to numerous methodological and computational advancements, a substantial number of application studies featuring data- and learning-based MPC are currently being published. In this talk, we will compare model-based and data-based MPC to explore which holds more potential for future impact. Highlighting recent developments, we will focus on two different data-based MPC schemes: one based on the Fundamental Lemma of Willems et al., and the other on the data-informativity paradigm. After providing an overview of and introduction to these methods, we will discuss their theoretical properties and their suitability for nonlinear systems, and demonstrate their advantages and limitations compared to model-based MPC through various application examples. This critical analysis and comparison aim to offer insights and recommendations for future research directions in the evolving domain of MPC.
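
For reference, the Fundamental Lemma of Willems et al. mentioned above can be stated compactly as follows (a standard background statement, not a summary of the specific schemes discussed in the talk): if a controllable LTI system of order n generates input-output data (u^d, y^d) of length T, and u^d is persistently exciting of order L+n, then a length-L pair (u, y) is a trajectory of the system if and only if there exists a vector g \in \mathbb{R}^{T-L+1} such that

    \begin{pmatrix} H_L(u^d) \\ H_L(y^d) \end{pmatrix} g = \begin{pmatrix} u \\ y \end{pmatrix},

where H_L(\cdot) denotes the Hankel matrix of depth L built from the recorded data. Data-driven MPC schemes of this type optimize over g in place of a parametric model.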

Biography: Frank Allgöwer is director of the Institute for Systems Theory and Automatic Control at the University of Stuttgart in Germany. His current research interests are in developing new methods for data-based control, optimization-based control, and networked control. Frank has received several recognitions for his work, including the IFAC Outstanding Service Award, the IEEE CSS Distinguished Member Award, the State Teaching Award of the German state of Baden-Württemberg, and the Leibniz Prize of the Deutsche Forschungsgemeinschaft. For their work on data-based MPC, Frank and his co-workers received the 2022 IEEE CSS George S. Axelby Outstanding Paper Award for the best paper published in the IEEE Transactions on Automatic Control.
  Frank was President of the International Federation of Automatic Control (IFAC) for the years 2017-2020. He was an Editor of the journal Automatica from 2001 to 2015, is an editor of the Springer Lecture Notes in Control and Information Sciences book series, and has published over 500 scientific articles. From 2012 until 2020, Frank also served as Vice-President of Germany's most important research funding agency, the German Research Foundation (DFG). From April until September 2024, Frank is a WASP Distinguished Guest Professor at Lund University in Sweden.




Dimitri Bertsekas

Date: August 24, 09:20-10:20

Title: Model Predictive Control and Reinforcement Learning: A Unified Framework Based on Dynamic Programming

Abstract: In this talk we describe a new conceptual framework that connects approximate Dynamic Programming (DP), Model Predictive Control (MPC), and Reinforcement Learning (RL). This framework centers around two algorithms, which are designed largely independently of each other and operate in synergy through the powerful mechanism of Newton's method. We call them the off-line training and the on-line play algorithms. The names are borrowed from some of the major successes of RL involving games; primary examples are the recent (2017) AlphaZero program (which plays chess) and the similarly structured, earlier (1990s) TD-Gammon program (which plays backgammon). In these game contexts, the off-line training algorithm is the method used to teach the program how to evaluate positions and to generate good moves at any given position, while the on-line play algorithm is the method used to play in real time against human or computer opponents.
  Significantly, the synergy between off-line training and on-line play also underlies MPC (as well as other major classes of sequential decision problems), and indeed the MPC design architecture is very similar to that of AlphaZero and TD-Gammon. This conceptual insight provides a vehicle for bridging the cultural gap between RL and MPC, and sheds new light on some fundamental issues in MPC. These include the enhancement of stability properties through rollout, the treatment of uncertainty through the use of certainty equivalence, the resilience of MPC in adaptive control settings that involve changing system parameters, and the insights provided by the superlinear performance bounds implied by Newton's method.
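
In standard dynamic programming notation, the off-line/on-line split sketched above can be written as follows (a generic rendering consistent with this framework, not a quotation of the talk): off-line training produces a cost approximation \tilde{J}; on-line play then applies (one-step) lookahead minimization

    \tilde{\mu}(x) \in \arg\min_{u \in U(x)} \mathbb{E}\big[\, g(x,u,w) + \alpha\, \tilde{J}\big(f(x,u,w)\big) \,\big],

and the cost function J_{\tilde{\mu}} of the resulting policy can be viewed as the outcome of a single Newton step for solving the Bellman equation J = TJ, started at \tilde{J}; this Newton-step view is the source of the superlinear performance bounds mentioned above.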

Biography: Dimitri Bertsekas' undergraduate studies were in engineering at the National Technical University of Athens, Greece. He obtained his MS in electrical engineering from the George Washington University, Washington, DC, in 1969, and his Ph.D. in system science from the Massachusetts Institute of Technology (M.I.T.) in 1971.
  Dr. Bertsekas has held faculty positions with the Engineering-Economic Systems Dept., Stanford University (1971-1974) and the Electrical Engineering Dept. of the University of Illinois, Urbana (1974-1979). From 1979 to 2019 he was with the Electrical Engineering and Computer Science Department of M.I.T., where he served as McAfee Professor of Engineering. Since 2019 he has been Fulton Professor of Computational Decision Making and a full-time faculty member at the School of Computing and Augmented Intelligence at Arizona State University (ASU), Tempe. He has served as a consultant to various private companies and as an editor for several scientific journals. In 1995 he founded a publishing company, Athena Scientific, which has published, among others, all of his books since that time. In 2023 he was appointed Chief Scientific Advisor of Bayforest Technologies, a London-based quantitative investment company.
  Professor Bertsekas' research spans several fields, including optimization, control, large-scale computation, reinforcement learning, and artificial intelligence, and is closely tied to his teaching and book authoring activities. He has written numerous research papers, and twenty books and research monographs, several of which are used as textbooks in MIT and ASU classes.
  Professor Bertsekas was awarded the INFORMS 1997 Prize for Research Excellence in the Interface Between Operations Research and Computer Science for his book "Neuro-Dynamic Programming", the 2001 ACC John R. Ragazzini Education Award, the 2009 INFORMS Expository Writing Award, the 2014 ACC Richard E. Bellman Control Heritage Award for "contributions to the foundations of deterministic and stochastic optimization-based methods in systems and control," the 2014 Khachiyan Prize for Life-Time Accomplishments in Optimization, the SIAM/MOS 2015 George B. Dantzig Prize, and the 2022 IEEE Control Systems Award. Together with his coauthor John Tsitsiklis, he was awarded the 2018 INFORMS John von Neumann Theory Prize, for the contributions of the research monographs "Parallel and Distributed Computation" and "Neuro-Dynamic Programming". In 2001, he was elected to the United States National Academy of Engineering for "pioneering contributions to fundamental research, practice and education of optimization/control theory, and especially its application to data communication networks."
  Dr. Bertsekas' recent books are "Introduction to Probability: 2nd Edition" (2008), "Convex Optimization Theory" (2009), "Dynamic Programming and Optimal Control", Vol. I (2017) and Vol. II (2012), "Convex Optimization Algorithms" (2015), "Nonlinear Programming" (2016), "Reinforcement Learning and Optimal Control" (2019), "Rollout, Policy Iteration, and Distributed Reinforcement Learning" (2020), "Abstract Dynamic Programming" (3rd edition, 2022), "Lessons from AlphaZero for Optimal, Model Predictive, and Adaptive Control" (2022), and "A Course in Reinforcement Learning" (2023), all published by Athena Scientific.