From Nonlinear through Hybrid to Stochastic Systems and Control
A Workshop in Honor of Andrew R. Teel's 60th Birthday
Goal of the Workshop
This workshop celebrates the 60th birthday of Professor Andrew R. Teel, whose groundbreaking contributions have had a profound impact on the field of control theory, particularly in the areas of hybrid systems, nonlinear control, and stability theory. The workshop will bring together leading researchers who have been influenced or inspired by Teel’s work. Each speaker will present state-of-the-art results that connect to themes he has developed or championed, including hybrid systems, robust and optimal control, multi-agent dynamics, and control with limited information.
Organizers
• Rafal Goebel, Loyola University
• Jorge Poveda, University of California at San Diego
• Ricardo Sanfelice, University of California at Santa Cruz
List of Confirmed Speakers
• Alessandro Astolfi, Imperial College London
• Maurice Heemels, Technische Universiteit Eindhoven
• Joao Hespanha, University of California at Santa Barbara
• Daniel Liberzon, University of Illinois Urbana-Champaign
• Lorenzo Marconi, University of Bologna
• Dragan Nesic, University of Melbourne
• Jorge Poveda, University of California at San Diego
• Ricardo Sanfelice, University of California at Santa Cruz
• Hyungbo Shim, Seoul National University
• Eduardo Sontag, Northeastern University
• Luca Zaccarian, LAAS-CNRS and University of Trento
Talk Abstracts
Steady-State Optimal Filtering for Linear and Nonlinear Systems, by Alessandro Astolfi
The steady-state optimal filtering problem for linear systems is revisited with the objective of establishing further insights into the structure of the underlying solution. It is shown that, in addition to the invariance property of a suitably defined hyperplane, the optimal filter is related to a triangularizing change of coordinates for certain Hamiltonian dynamics associated with the filtering problem. The implication of this observation is twofold. First, the novel interpretation admits a conceptually straightforward counterpart in the nonlinear setting in terms of invariant distributions. These distributions then permit the design of steady-state optimal filters for nonlinear systems that rely only upon the solutions of linear partial differential equations, solutions that are independent of the specific time history of the measured output. Second, the conditions may be further leveraged to determine a polynomial algebraic equation, which characterizes the solution of the filtering problem and which is expressed in the entries of the optimal filter gain alone, hence circumventing the need to solve the underlying Riccati equation.
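For context on the Riccati equation mentioned above, the scalar steady-state Kalman-Bucy filter is the simplest case where its algebraic Riccati equation can be solved in closed form; the sketch below uses illustrative numbers and is standard background, not the construction of the talk.

```python
import math

# Scalar steady-state Kalman-Bucy filter: dx = a*x dt + dw, y = x + v,
# with process noise intensity q and measurement noise intensity r.
# The filter algebraic Riccati equation  2*a*p + q - p**2/r = 0  has the
# positive root below, and the steady-state gain is L = p / r.
a, q, r = -1.0, 2.0, 0.5

p = r * (a + math.sqrt(a**2 + q / r))  # positive root of the scalar ARE
L = p / r                              # steady-state filter gain

# Check the ARE residual and that the filter error dynamics a - L are stable.
residual = 2 * a * p + q - p**2 / r
print(abs(residual) < 1e-12, a - L < 0)  # → True True
```

The positive root follows from the quadratic formula applied to p**2 - 2*a*r*p - q*r = 0, and stability of the error dynamics holds because sqrt(a**2 + q/r) exceeds |a|.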
Neuromorphic control through the eyes of hybrid systems, by Maurice Heemels
Neuromorphic engineering is an emerging research domain that aims to realize the implementation advantages that brain-inspired technologies can offer over classical digital technologies, including energy efficiency, adaptability, and robustness. Neuromorphic engineering also offers potential advantages for future control systems, although systematic methods for the design of neuromorphic controllers are currently lacking. In this talk, we discuss ideas toward such design methods for classes of neuromorphic controllers, taking inspiration from event-based control and hybrid systems tools. We discuss case studies in rhythmic control, thermal systems, and nuclear fusion.
Reinforcement Learning for Large-Scale Games, by Joao Hespanha
This talk addresses the use of reinforcement learning in two-player zero-sum Markov games with finite but large state spaces, for which the goal is to find minimax policies with “modest” computation. We use the qualifier “modest” to mean that we seek to certify policies as optimal without exploring the full state space of the game. The approach followed is strongly motivated by Q-learning, which was proposed in the late 1980s to extend the single-player dynamic programming principle to model-free reinforcement learning by eliminating the need for a known transition model. Extensions of Q-learning to two-player zero-sum games appeared shortly after. Since then, most of the work devoted to proving correctness of Q-learning relies on establishing that its iteration converges to a unique fixed point of a Bellman-like equation, which generally requires exploring the full state space. We will see that, for zero-sum games, it is possible to construct provably correct optimal policies using algorithms inspired by Q-learning, without requiring convergence of the Q-function over the whole state space. In fact, the samples used to update the Q-function may not even explore the whole set of reachable states and, for certain classes of games, the fraction of explored states becomes smaller and smaller as the size of the state space increases.
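As background for the Q-learning connection above, here is a minimal sketch of sample-based temporal-difference learning in a tiny sequential zero-sum game; the game, payoffs, and learning parameters are illustrative choices, not from the talk.

```python
import random

random.seed(0)

# Payoff matrix of a tiny sequential zero-sum game: the maximizer picks a row,
# then the minimizer picks a column; the entry is the maximizer's payoff.
M = [[1.0, 4.0],
     [3.0, 2.0]]

Q = {}  # Q[(state, action)]; states: "root" (maximizer) and ("row", i) (minimizer)

def q(state, action):
    return Q.get((state, action), 0.0)

def value(state):
    # The owner of the state maximizes or minimizes over its two actions.
    vals = [q(state, a) for a in (0, 1)]
    return max(vals) if state == "root" else min(vals)

alpha = 0.5
for _ in range(2000):
    # Play one episode with uniformly random exploration.
    i = random.randrange(2)   # maximizer's move
    j = random.randrange(2)   # minimizer's move
    s = ("row", i)
    # Temporal-difference updates, leaf first, then the root.
    Q[(s, j)] = q(s, j) + alpha * (M[i][j] - q(s, j))
    Q[("root", i)] = q("root", i) + alpha * (value(s) - q("root", i))

# The learned root value approaches the maximin value max_i min_j M[i][j] = 2.
print(round(value("root"), 3))  # → 2.0
```

Because the game is turn-based, the minimax over joint actions reduces to a max or a min at each state; the simultaneous-move case requires solving a matrix game at every state instead.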
Localization and Mapping with Coarse Information, by Daniel Liberzon
We will discuss information requirements for control-enabling tasks such as state estimation, localization and mapping in an unknown environment. Through examples, we will study simultaneous localization and mapping based on a binary signal generated by an unknown landmark. We will employ the classical observability decomposition for linear systems, as well as its differential-geometric counterpart for nonlinear systems, to gain a better understanding of possibilities and limitations inherent to these tasks.
Robust Control with Bézier Curves: Geometry Changes, Control Holds, by Lorenzo Marconi
This talk investigates the problem of robust output regulation for exogenous signals represented using Bézier curves. We propose a novel steady-state framework grounded in the Bézier formulation and design a robust regulator composed of a chain of N integrators, where N corresponds to the number of control points defining the curve. The closed-loop regulation guarantees error convergence within practical bounds, while the open-loop formulation enables dynamic adjustments to both the geometry and timing of the target curve. A key result of our analysis reveals that the regulation error is inversely proportional to a generalized notion of distance, emphasizing the inherent spatial localization properties of Bézier curves. This feature ensures that model adaptations remain efficient and stable, even when significant modifications are made to remote regions of the signal. Furthermore, the proposed tool is demonstrated to be effective in the context of optimal command governor control strategies, broadening its applicability in advanced control design.
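As background on the curve representation above, a Bézier curve defined by its control points can be evaluated with de Casteljau's algorithm, i.e., repeated linear interpolation between consecutive control points; the cubic example below is illustrative and not tied to the regulator design in the talk.

```python
def de_casteljau(control_points, t):
    """Evaluate a Bezier curve at parameter t in [0, 1] via de Casteljau's
    algorithm: repeatedly interpolate between consecutive points."""
    pts = [tuple(p) for p in control_points]
    while len(pts) > 1:
        pts = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
               for p, q in zip(pts, pts[1:])]
    return pts[0]

# A cubic curve (4 control points): endpoints are interpolated exactly,
# interior control points shape the curve without lying on it.
ctrl = [(0.0, 0.0), (0.0, 1.0), (1.0, 1.0), (1.0, 0.0)]
print(de_casteljau(ctrl, 0.0))  # → (0.0, 0.0), the first control point
print(de_casteljau(ctrl, 1.0))  # → (1.0, 0.0), the last control point
print(de_casteljau(ctrl, 0.5))  # → (0.5, 0.75)
```

Moving one control point changes the curve only through the corresponding Bernstein weight, which is the spatial-localization property the abstract alludes to.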
Stability of Optimal and Near-Optimal Control Laws, by Dragan Nesic
This talk presents an overview of our recent results on the stability of various optimal and near-optimal control laws for nonlinear systems, results that are highly influenced by a technique proposed by Andy Teel and his team in the early 2000s for analyzing the stabilizing properties of MPC. Our work lies at the intersection of (approximate) dynamic programming and Lyapunov stability theory. The talk will start with discounted optimal control, and we will state conditions under which the optimal controller stabilizes the closed-loop system. Both stochastic and deterministic results will be highlighted. We will then summarize our recent results on the classical approximate dynamic programming algorithms for nonlinear deterministic systems: value iteration (VI) and policy iteration (PI). VI and PI provide a foundation for a range of optimal control, planning, and reinforcement learning algorithms. Novel applications of VI-based control algorithms, such as reinforcement learning, in safety-critical systems impose extra requirements on the control laws generated by VI, in particular stability and safety. We will discuss results on the stability, robustness, and near-optimality of VI-based control laws that address such questions. Finally, time permitting, we will summarize our recent work on PI, which proposes a new type of PI algorithm with guaranteed recursive feasibility and stability.
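To make the VI connection concrete, the sketch below runs value iteration on a toy deterministic discounted problem and shows that the induced greedy control law drives the state to the target; the dynamics, costs, and discount factor are illustrative placeholders, not from the talk.

```python
# Value iteration on a toy deterministic problem: states 0..4, actions move
# left/stay/right, stage cost |state|, discount gamma < 1.
states = range(5)
actions = (-1, 0, 1)
gamma = 0.9

def step(s, a):
    return min(max(s + a, 0), 4)   # saturate at the boundary states

V = {s: 0.0 for s in states}
for _ in range(200):  # Bellman iteration: V <- min_a [cost + gamma * V(next)]
    V = {s: min(abs(s) + gamma * V[step(s, a)] for a in actions) for s in states}

def policy(s):
    # Greedy control law induced by the (numerically converged) value function.
    return min(actions, key=lambda a: abs(s) + gamma * V[step(s, a)])

# Closed loop under the greedy policy: every state is driven to the origin.
s = 4
traj = [s]
for _ in range(6):
    s = step(s, policy(s))
    traj.append(s)
print(traj)  # → [4, 3, 2, 1, 0, 0, 0]
```

The stability question the talk addresses is exactly when such greedy laws, computed from finitely many (approximate) Bellman iterations, still stabilize the closed loop.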
Prescribed-Time and Fixed-Time Stability in Hybrid Dynamic Inclusions, by Jorge Poveda
We study the properties of prescribed-time stability (PT-S) and fixed-time stability (FxT-S) in hybrid dynamic inclusions. The PT-S property is induced via a class of dynamic gains that generate finite escape times coinciding with the time at which the main state of the system is desired to converge to a given compact set. The FxT-S property, on the other hand, is induced by incorporating a class of non-Lipschitz dynamics into the flows of the hybrid system. It is shown that both properties can be certified via hybrid Lyapunov functions. Different applications are presented to illustrate the main results.
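For background, the standard flow-only (continuous-time) sufficient condition for fixed-time stability is the Lyapunov inequality below; the hybrid certificates in the talk generalize conditions of this type to systems with both flows and jumps.

```latex
% Classical sufficient condition for fixed-time stability along flows:
\dot V(x) \;\le\; -c_1\, V(x)^{p} \;-\; c_2\, V(x)^{q},
\qquad c_1, c_2 > 0, \quad 0 < p < 1 < q,
% which yields a settling-time bound uniform in the initial condition:
T(x_0) \;\le\; \frac{1}{c_1\,(1-p)} \;+\; \frac{1}{c_2\,(q-1)}.
```

The exponent p < 1 dominates near the set (finite-time convergence), while q > 1 dominates far from it, which is what makes the bound independent of x_0.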
Feasibility and Regularity of Barrier and Lyapunov-Based Controllers for Dynamical Systems, by Ricardo Sanfelice
This talk presents recent advances in converse theorems for safety and stabilization using barrier and Lyapunov functions, with an emphasis on their feasibility and regularity properties. For continuous-time systems modeled as differential inclusions, we show via counterexamples that autonomous and continuous barrier functions may fail to exist even for smooth and safe systems. Motivated by converse Lyapunov theorems, we establish the necessity and sufficiency of time-varying barrier functions under mild assumptions, constructing such functions using marginal reachability-based formulations and nonsmooth analysis. We further demonstrate that these constructions inherit regularity from the system and, in the smooth case, imply the existence of smooth barrier certificates. In the hybrid setting, we examine the existence of smooth control Lyapunov functions for asymptotically stabilizable compact sets, and derive conditions under which continuous state-feedback laws exist. Finally, we propose minimum-norm controllers for hybrid systems by selecting inputs that minimally decrease Lyapunov functions across flows and jumps. These results align with the workshop’s focus on optimization-based controllers, providing foundational insights into when such controllers exist and how their regularity can be guaranteed.
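To illustrate the minimum-norm selection mentioned above, here is the classical pointwise min-norm construction for continuous-time flows with a scalar input; this is standard CLF background under illustrative dynamics, not the hybrid selection developed in the talk.

```python
# Pointwise min-norm controller: choose the smallest input enforcing a
# prescribed decrease of a Lyapunov function V along the flow.
def min_norm_input(LfV, LgV, sigma):
    """Solve  min u**2  s.t.  LfV + LgV*u <= -sigma  (scalar input).
    Assumes LgV != 0 whenever the constraint is not already satisfied."""
    if LfV + sigma <= 0.0:       # decrease condition already holds: use u = 0
        return 0.0
    return -(LfV + sigma) / LgV  # smallest input making the constraint active

# Example: V = x**2/2 along dx/dt = x + u, so LfV = x*x and LgV = x.
x = 2.0
u = min_norm_input(x * x, x, sigma=0.5 * x * x)  # enforce dV/dt <= -x**2/2
print(u)  # → -3.0, giving dV/dt = x*(x + u) = -2.0 = -x**2/2 exactly
```

In the hybrid setting of the talk, an analogous selection must be made separately across flows and jumps, and regularity of the resulting feedback is precisely what the converse results address.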
Emergence, Robustness, and Adaptation in Heterogeneous Multi-Agent Systems, by Hyungbo Shim
We introduce the blended dynamics theorems in various forms, which clarify how enforced synchronization against heterogeneity in a networked dynamical system gives rise to emergent behavior. This finding naturally leads to engineering applications such as distributed state estimation and distributed optimization. The blended dynamics theorems also reveal how robustness emerges in multi-agent systems under various sources of uncertainty. Finally, we illustrate how enforced synchronization induces adaptation in multi-agent systems.
The Model Recovery Anti-Windup Paradigm: An Essential Winding Roadmap, by Luca Zaccarian
Old legends offer controversial explanations of the "anti-windup" paradigm proposed by Andy Teel in his visionary ECC 1997 twin papers. Some say he was trying to resolve some unclear addiction to mathematics. Some say it was just a typo. One way or another, I had the fortune and pleasure to be around when Andy was trying to make this paradigm work on the bugged test bench that Newport Corporation gave him, where he was supposed to solve an "inconceivable" saturation problem (whatever that means). A few years later, after solving that problem, we renamed that control paradigm Model Recovery Anti-Windup (MRAW). In this talk, I will explain its core philosophy and main properties, and illustrate a few successful applications with nonlinear saturated plants subject to windup issues.
