

A Tutorial on Data-Enabled Predictive Control (DeePC)

Presented as a three-hour tutorial at ICCAS 2024, Jeju Island, Korea

Florian Dörfler

October 29, 2024

Transcript

1. Acknowledgements: Jeremy Coulson, Ivan Markovsky & Alberto Padoan, + many many others: Linbin Huang, Andras Sasfi, John Lygeros, Alessandro Chiuso, Roy Smith, Keith Moffat, ...
2. Thoughts on data in control systems: increasing role of data-centric methods in science / engineering / industry due to
• methodological advances in statistics, optimization, & machine learning (ML)
• unprecedented availability of brute force: deluge of data & computational power
• ...and frenzy surrounding big data & ML
Make up your own opinion, but ML works too well to be ignored – also in control?!?
"One of the major developments in control over the past decade – & one of the most important moving forward – is the interaction of ML & control systems." [CSS roadmap]
8. Approaches to data-driven control
• indirect data-driven control via models: data → SysID → model + uncertainty → control
• growing trend: direct data-driven control by-passing models ...(again) hyped, why?
Central promise: It is often easier to learn a control policy from data rather than a model. Example 1973: autotuned PID.
The direct approach is a viable alternative
• for some applications: the model-based approach is too complex to be useful → too complex models, environments, sensing modalities, specifications (e.g., wind farm)
• due to (well-known) shortcomings of ID → too cumbersome, models not identified for control, incompatible uncertainty estimates, ...
• when brute-force data/compute is available
15. Abstraction reveals pros & cons
indirect (model-based) data-driven control:
minimize control cost(u, x)
subject to u, x satisfy state-space model,
where x is estimated from u, y & the model,
where the model is identified from (u^d, y^d) data
→ nested multi-level optimization problem: outer optimization ∘ middle optimization ∘ inner optimization
→ separation & certainty equivalence (→ LQG case) vs. no separation (→ ID-4-control)
direct (black-box) data-driven control:
minimize control cost(u, y)
subject to u, y consistent with (u^d, y^d) data
→ trade-offs: modular vs. end-2-end, suboptimal (?) vs. optimal, convex vs. non-convex (?)
Additionally: account for uncertainty (hard to propagate in the indirect approach)
24. Indirect (models) vs. direct (data) [diagram: unknown system x⁺ = f(x, u), y = h(x, u) with input u & output y]
indirect: • models are useful for design & beyond • modular → easy to debug & interpret • id = noise filtering • id = projection on model class • harder to propagate uncertainty through id • no robust separation principle → suboptimal • ...
direct: • some models too complex to be useful • end-to-end → suitable for non-experts • design handles noise • harder to inject side info but no bias error • transparent: no unmodeled dynamics • possibly optimal but often less tractable • ...
lots of pros, cons, counterexamples, & no universal conclusions [discussion]
26. A direct approach: dictionary + MPC
1 trajectory dictionary learning • motion primitives / basis functions • theory: Koopman & Liouville; practice: (E)DMD & particles
2 MPC optimizing over the dictionary span
→ huge theory vs. practice gap
→ back to basics: the impulse response of an LTI system [plot: u_0 = 1, u_1 = u_2 = ··· = 0, x_0 = 0 → outputs y_0, y_1, y_2, ...]
30. [plot: impulse response y_0, y_1, y_2, ... under u_0 = 1, u_1 = u_2 = ··· = 0, x_0 = 0]
Now what if we had the impulse response recorded in our data-library?
[g_0 g_1 g_2 ...] = [y_0^d y_1^d y_2^d ...]   ("all solutions can be parameterized by data")
→ dynamic matrix control (Shell, 1970s): predictive control from raw data
y_future(t) = [y_0^d y_1^d y_2^d ...] · [u_future(t); u_future(t−1); u_future(t−2); ...]
today: arbitrary, finite, & corrupted data, ... stochastic & nonlinear?
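To make the dynamic-matrix-control idea concrete, here is a minimal numpy sketch (a toy illustration, not Shell's implementation), assuming a stable SISO plant at rest and a hypothetical recorded impulse response g: prediction is just a convolution of the raw data with the candidate future input.

```python
import numpy as np

# Toy dynamic-matrix-style predictor: the recorded impulse-response data
# g_tau = y_tau^d directly parameterizes all future outputs (plant at rest).
g = np.array([0.0, 0.5, 0.8, 0.6, 0.4, 0.25, 0.15, 0.08, 0.04])  # hypothetical data

def predict(u_future, g):
    # y(t) = sum_tau g[t - tau] * u(tau): convolution with the data
    T = len(u_future)
    return np.array([sum(g[t - tau] * u_future[tau]
                         for tau in range(max(0, t - len(g) + 1), t + 1))
                     for t in range(T)])

print(predict(np.ones(8), g))  # predicted response to a unit step, from data only
```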
35. Today’s menu
1. behavioral system theory: fundamental lemma
2. DeePC: data-enabled predictive control
3. robustification via salient regularizations
4. case studies from wind & power systems
blooming literature (2-3 arXiv preprints / week)
→ tutorial [link] to get started • [link] to graduate school material • [link] to survey • [link] to related bachelor lecture • [link] to related publications
[shown: first page of "Data-Driven Control Based on Behavioral Approach: From Theory to Applications in Power Systems" by Ivan Markovsky, Linbin Huang, and Florian Dörfler]
38. Organization of this lecture
• I will teach the basics & provide pointers to more sophisticated research material → study cutting-edge papers yourself
• it's a school: so we will spend time on the board → take notes
• we teach this material also in the ETH Zürich bachelor & have plenty of background material + implementation experience → please reach out to me or Saverio if you need anything
• we will take a break after 90 minutes → coffee
39. Preview: complex 4-area power system: large (n=208), few sensors (8), nonlinear, noisy, stiff, input constraints, & decentralized control specifications
control objective: oscillation damping, without a model (grid has many owners, models are proprietary, operation in flux, ...)
[figure: four-area test system with two HVDC stations; plots of the uncontrolled tie-line flow (p.u.) over time (s), the data-collection phase, & the controlled response]
seek a method that works reliably, can be efficiently implemented, & is certifiable → automating ourselves
44. Reality check: black magic or hoax? Surely, nobody would apply such a shaky data-driven method
• on the world's most complex engineered system (the electric grid),
• using the world's biggest actuators (Gigawatt-sized HVDC links),
• and subject to real-time, safety, & stability constraints ... right?
[email excerpt: "Dear Linbin and Florian: I just submitted a very favourable review of your paper [...] which I believe could be of importance to our work at Hitachi Power Grids. We do have [...] require off-line tuning that [...] commissioning engineer can do on his own [...] an adaptive approach would be very interesting. If possible I would like to try the decentralized DeePC approach with our more detailed HVDC system models on the interarea oscillation problem. Could some code be made available [...]? Would you be interested in working together to do such a demonstration? [...]"]
annotations: "It works! – even on an entirely different model & software platform" ... "a few days after sending our code"
at least someone believes that our method is practically useful ...
48. LTI system representations (board)
• auto-regressive with exogenous input (ARX): y(t+2) + 3y(t+1) + 2y(t) = u(t)
• transfer function: G(z) = 1 / (z² + 3z + 2)
• state space: x⁺ = [0 1; −2 −3] x + [0; 1] u,  y = [1 0] x
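A quick numerical sanity check of the board example, as a sketch: simulate the companion-form state space above and verify that its output satisfies the ARX relation (all numbers are just the toy example from the board).

```python
import numpy as np

# Sanity check: the companion-form state space
#   x+ = [[0, 1], [-2, -3]] x + [0, 1] u,   y = [1, 0] x
# satisfies the ARX relation y(t+2) + 3 y(t+1) + 2 y(t) = u(t).
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([0.0, 1.0])
C = np.array([1.0, 0.0])

rng = np.random.default_rng(0)
u = rng.standard_normal(50)
x = np.zeros(2)
y = np.empty(50)
for t in range(50):
    y[t] = C @ x
    x = A @ x + B * u[t]

residual = y[2:] + 3 * y[1:-1] + 2 * y[:-2] - u[:-2]
print(np.max(np.abs(residual)))  # ~1e-15: the two representations agree
```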
49. kernel representation (board): let σ be the shift operator, σy(t) = y(t+1); then the same system is described by the polynomial operator [σ² + 3σ + 2, −1] acting on the trajectory: [σ² + 3σ + 2, −1] · [y; u] = 0, i.e., (σ² + 3σ + 2) y − u = 0.
50. Behavioral view on dynamical systems
Definition: A discrete-time dynamical system is a 3-tuple (Z≥0, W, B) where (i) Z≥0 is the discrete-time axis, (ii) W is the signal space, & (iii) B ⊆ W^{Z≥0} is the behavior (the set of all trajectories, i.e., the set of all time series).
Definition: The dynamical system (Z≥0, W, B) is (i) linear if W is a vector space & B is a subspace of W^{Z≥0}, (ii) & time-invariant if w ∈ B ⟹ σw ∈ B.
LTI system = shift-invariant subspace of the trajectory space → abstract perspective suited for data-driven control
54. Properties of the LTI trajectory space (board): with x(0) = x_0 & x⁺ = Ax + Bu:
x(1) = A x_0 + B u(0), x(2) = A² x_0 + A B u(0) + B u(1), ...
y(t) = C Aᵗ x_0 + Σ_{τ=0}^{t−1} C A^{t−1−τ} B u(τ) + D u(t)
stacking t = 0, ..., T−1: y = O_T x_0 + M_T u, where O_T = [C; CA; ...; CA^{T−1}] is the observability matrix & M_T is the (lower-triangular Toeplitz) impulse matrix with blocks D, CB, CAB, CA²B, ...
55. (board) recall observability: can we reconstruct x_0 from y = O_T x_0 + M_T u? yes, iff O_T has full column rank → the smallest T such that O_T has full rank is called the lag ℓ of the system
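A small sketch of this definition, reusing the toy second-order pair (A, C) from the board example (assumed observable): grow the observability matrix until it reaches full rank.

```python
import numpy as np

# Lag of the toy pair (A, C): the smallest T for which
# O_T = [C; CA; ...; CA^(T-1)] has full column rank n.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
C = np.array([[1.0, 0.0]])
n = A.shape[0]

OT, T = C.copy(), 1
while np.linalg.matrix_rank(OT) < n:
    OT = np.vstack([OT, C @ np.linalg.matrix_power(A, T)])
    T += 1
print("lag =", T)  # 2 for this second-order example
```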
56. (board) space of all trajectories ("behavior") of length T:
[u; y] = [I 0; M_T O_T] · [u; x_0]
this map has full column rank for T ≥ ℓ (m·T columns for the inputs, where m = # inputs, + n for x_0) ⟹ the dimension of the behavior is m·T + n for T ≥ ℓ
57. LTI systems & matrix time series: the foundation of subspace system identification & signal recovery algorithms [plots: input u(t) & output y(t) time series]
u(t), y(t) satisfy the LTI difference equation b_0 u_t + b_1 u_{t+1} + ... + b_n u_{t+n} + a_0 y_t + a_1 y_{t+1} + ... + a_n y_{t+n} = 0 (ARX / kernel representation)
⟹ (under assumptions) [0 ... 0 b_0 a_0 b_1 a_1 ... b_n a_n 0 ... 0] lies in the left nullspace of the trajectory matrix (collected data)
H(u^d, y^d) = [(u^d_{1,1}; y^d_{1,1}) (u^d_{1,2}; y^d_{1,2}) (u^d_{1,3}; y^d_{1,3}) ...; (u^d_{2,1}; y^d_{2,1}) (u^d_{2,2}; y^d_{2,2}) (u^d_{2,3}; y^d_{2,3}) ...; ⋮; (u^d_{T,1}; y^d_{T,1}) (u^d_{T,2}; y^d_{T,2}) (u^d_{T,3}; y^d_{T,3}) ...]
where the j-th column collects the j-th experiment (1st, 2nd, 3rd, ...) of length T.
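A minimal sketch of this fact for the toy system from the board (n = 2, so depth n + 1 = 3): the left singular vector of the trajectory matrix associated with its (numerically) zero singular value recovers the ARX coefficients up to scaling.

```python
import numpy as np

# Simulate the toy system y(t+2) + 3 y(t+1) + 2 y(t) = u(t) under a rich input.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([0.0, 1.0]); C = np.array([1.0, 0.0])
rng = np.random.default_rng(1)
u = rng.standard_normal(200)
x = np.zeros(2); y = np.empty(200)
for t in range(200):
    y[t] = C @ x
    x = A @ x + B * u[t]

# trajectory matrix with interleaved rows (u_t, y_t, u_{t+1}, y_{t+1}, ...)
depth = 3                              # n + 1 = 3
cols = len(u) - depth + 1
H = np.vstack([np.vstack([u[k:k+cols], y[k:k+cols]]) for k in range(depth)])

U_, S, _ = np.linalg.svd(H)
print(S[-1])                           # ~0: one-dimensional left nullspace
print(U_[:, -1])                       # proportional to [b0 a0 b1 a1 b2 a2]
                                       #               = [-1, 2, 0, 3, 0, 1]
```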
61. Fundamental Lemma [plots: data u(t), y(t)]
Given: data (u^d_i, y^d_i) ∈ R^{m+p} & LTI complexity parameters: lag ℓ & order n
set of all T-length trajectories
= { (u, y) ∈ R^{(m+p)T} : ∃ x ∈ R^{nT} s.t. x⁺ = Ax + Bu, y = Cx + Du }   (parametric state-space model)
= colspan H(u^d, y^d)   (raw data: every column is an experiment)
if and only if the trajectory matrix has rank m·T + n for all T > ℓ
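A numerical illustration of the lemma on the toy system (m = p = 1, n = 2), as a sketch: the trajectory matrix built from one persistently exciting experiment has rank m·T + n, and a fresh trajectory lies in its column span.

```python
import numpy as np

# Toy check (m = p = 1, n = 2): the trajectory matrix of one persistently
# exciting experiment has rank m*T + n, and it spans every length-T trajectory.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([0.0, 1.0]); C = np.array([1.0, 0.0])

def simulate(u):
    x, y = np.zeros(2), np.empty(len(u))
    for t, ut in enumerate(u):
        y[t] = C @ x
        x = A @ x + B * ut
    return y

rng = np.random.default_rng(2)
ud = rng.standard_normal(100)                     # offline experiment
yd = simulate(ud)

T = 10
cols = len(ud) - T + 1
H = np.array([np.concatenate([ud[j:j+T], yd[j:j+T]]) for j in range(cols)]).T
print(np.linalg.matrix_rank(H), "== m*T + n =", 1 * T + 2)

u_new = rng.standard_normal(T)                    # a fresh trajectory ...
w_new = np.concatenate([u_new, simulate(u_new)])
g, *_ = np.linalg.lstsq(H, w_new, rcond=None)
print(np.linalg.norm(H @ g - w_new))              # ~0: it lies in colspan(H)
```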
66. set of all T-length trajectories = { (u, y) ∈ R^{(m+p)T} : ∃ x ∈ R^{nT} s.t. x⁺ = Ax + Bu, y = Cx + Du } (parametric state-space model) = colspan H(u^d, y^d) (non-parametric model from raw data)
→ all trajectories are constructible from finitely many previous trajectories
• standing on the shoulders of giants: the classic Willems result was only "if" & required further assumptions: Hankel structure, persistency of excitation, controllability
• the terminology fundamental is justified: motion primitives, subspace SysID, dictionary learning, (E)DMD, ... all implicitly rely on this equivalence
• many recent extensions to other system classes (bilinear, descriptor, LPV, delay, Volterra series, Wiener-Hammerstein, ...), other matrix data structures (mosaic-Hankel, Page, ...), & other proof methods
71. Input design for the Fundamental Lemma
Definition: The data signal u^d ∈ R^{mT_d} of length T_d is persistently exciting of order T if the (mT-row) Hankel matrix
[u_1 ... u_{T_d−T+1}; ⋮ ⋱ ⋮; u_T ... u_{T_d}]
is of full rank, i.e., u^d has to be sufficiently rich & long: T_d − T + 1 ≥ mT.
Input design [Willems et al. '05]: controllable LTI system & persistently exciting input u^d of order T + n ⟹ rank H(u^d, y^d) = mT + n.
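A sketch of the definition for a scalar input (m = 1): persistency of excitation of order L is just a rank test on the depth-L Hankel matrix of u^d.

```python
import numpy as np

# Persistency-of-excitation test for a scalar input (m = 1): full row rank of
# the depth-L Hankel matrix of u^d, which needs Td - L + 1 >= m*L columns.
def is_persistently_exciting(u, L):
    cols = len(u) - L + 1
    if cols < L:
        return False
    H = np.array([u[j:j+cols] for j in range(L)])
    return np.linalg.matrix_rank(H) == L

rng = np.random.default_rng(3)
print(is_persistently_exciting(rng.standard_normal(100), 12))  # True: rich input
print(is_persistently_exciting(np.ones(100), 12))              # False: too poor
```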
75. Data matrix structures & preprocessing (board)
• trajectory matrix H(w^d) = [w^1 w^2 ...]: requires many short independent experiments
• Page matrix P(w^d): one long experiment cut into disjoint windows: requires one long experiment
• Hankel matrix H(w^d) = [w_1 w_2 ...; w_2 w_3 ...; ⋮]: requires only one short experiment, but this matrix is structured
• or any combination: mosaic / block-Hankel
76. (board) de-noising = rank reduction, since rank H(w^d) = mT + n (low-rank)
heuristic: SVD-thresholding: take the SVD of H(w^d) & remove all but the largest mT + n singular values (→ never worked in practice)
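A sketch of the SVD-thresholding heuristic on a synthetic low-rank-plus-noise matrix (the rank 12 stands in for mT + n); note that plain truncation destroys the Hankel structure, consistent with the board remark that this rarely works as hoped.

```python
import numpy as np

# SVD-thresholding: replace the noisy data matrix by its closest rank-r
# approximation, r = m*T + n (truncation does not preserve Hankel structure).
def lowrank_denoise(H, r):
    U, S, Vt = np.linalg.svd(H, full_matrices=False)
    return (U[:, :r] * S[:r]) @ Vt[:r]

rng = np.random.default_rng(4)
H_clean = rng.standard_normal((20, 12)) @ rng.standard_normal((12, 91))  # rank 12
H_noisy = H_clean + 0.01 * rng.standard_normal(H_clean.shape)
H_hat = lowrank_denoise(H_noisy, r=12)
print(np.linalg.matrix_rank(H_hat))                               # 12
print(np.linalg.norm(H_hat - H_clean) / np.linalg.norm(H_clean))  # small
```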
77. Bird’s view & today’s sample path through the accelerating literature (1980s → 2005 → today): PE in linear systems [Green & Moore, '86] → subspace intersection methods [Moonen et al., '89] → subspace predictive control [Favoreel et al., '99] → Fundamental Lemma [Willems, Rapisarda, & Markovsky '05] → deterministic data-driven control [Markovsky & Rapisarda, '08] → data-driven control of linear systems [De Persis & Tesi, '19] → regularizations & MPC scenario [Coulson et al., '19] → many recent variations & extensions [van Waarde et al., '20] → generalized low-rank version [Markovsky & Dörfler, '20] → data informativity [van Waarde et al., '20] → LFT formulation [Berberich et al., '20] → robust stability & recursive feasibility [Berberich et al., '20] → (distributional) robustness [Coulson et al., '20, Huang et al., '21] → regularizer from relaxed SysID [Dörfler et al., '21] → stabilization of nonlinear systems [De Persis & Tesi, '21] → subspace methods [Breschi, Chiuso, & Formentin '22] → instrumental variables [van Wingerden et al., '22] → ... ? (explicit vs. implicit formulations; also non-control applications, e.g., estimation, filtering, & SysID)
78. Output Model Predictive Control (MPC)
minimize over u, x, y: Σ_{k=1}^{T_future} ‖y_k − r_k‖²_Q + ‖u_k‖²_R   (quadratic cost with R ≻ 0, Q ⪰ 0, & reference r)
subject to x_{k+1} = A x_k + B u_k, y_k = C x_k + D u_k ∀k ∈ {1, ..., T_future}   (model for prediction)
x_{k+1} = A x_k + B u_k, y_k = C x_k + D u_k ∀k ∈ {−T_ini−1, ..., 0}   (model for estimation, with T_ini ≥ lag; many flavors)
u_k ∈ U, y_k ∈ Y ∀k ∈ {1, ..., T_future}   (hard operational or safety constraints)
"[MPC] has perhaps too little system theory and too much brute force [...], but MPC is an area where all aspects of the field [...] are in synergy." – Willems '07
Elegance aside, for an LTI plant, deterministic, & with known model, MPC is the gold standard of control.
83. Data-enabled Predictive Control (DeePC)
minimize over g, u, y: Σ_{k=1}^{T_future} ‖y_k − r_k‖²_Q + ‖u_k‖²_R   (quadratic cost with R ≻ 0, Q ⪰ 0, & reference r)
subject to H(u^d, y^d) · g = [u_ini; y_ini; u; y]   (non-parametric model for prediction and estimation)
u_k ∈ U, y_k ∈ Y ∀k ∈ {1, ..., T_future}   (hard operational or safety constraints)
• real-time measurements (u_ini, y_ini) for estimation, updated online
• trajectory matrix H(u^d, y^d) from past experimental data, collected offline (could be adapted online)
→ equivalent to MPC in the deterministic LTI case ... but needs to be robustified in case of noise / nonlinearity!
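A sketch of the core DeePC constraint on the toy system from earlier: given T_ini recent samples and a candidate future input, the future output is obtained from one linear solve with the offline data and no identified model; in the deterministic LTI case the prediction is exact (the MPC equivalence).

```python
import numpy as np

# DeePC prediction step on the toy system: split the data Hankel matrix into
# past/future blocks, match the recent T_ini samples, and read off the future.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([0.0, 1.0]); C = np.array([1.0, 0.0])

def simulate(u):
    x, y = np.zeros(2), np.empty(len(u))
    for t, ut in enumerate(u):
        y[t] = C @ x
        x = A @ x + B * ut
    return y

rng = np.random.default_rng(5)
ud = rng.standard_normal(200); yd = simulate(ud)   # offline experiment

Tini, Tf = 4, 8                                    # Tini >= lag = 2
L = Tini + Tf
cols = len(ud) - L + 1
Hu = np.array([ud[j:j+L] for j in range(cols)]).T  # columns = length-L windows
Hy = np.array([yd[j:j+L] for j in range(cols)]).T
Up, Uf, Yp, Yf = Hu[:Tini], Hu[Tini:], Hy[:Tini], Hy[Tini:]

# online: recent measurements pin down the (unknown) internal state
u_run = rng.standard_normal(30); y_run = simulate(u_run)
uini, yini = u_run[-Tini:], y_run[-Tini:]
u_f = np.ones(Tf)                                  # candidate future input

g, *_ = np.linalg.lstsq(np.vstack([Up, Yp, Uf]),
                        np.concatenate([uini, yini, u_f]), rcond=None)
y_pred = Yf @ g
y_true = simulate(np.concatenate([u_run, u_f]))[-Tf:]
print(np.linalg.norm(y_pred - y_true))             # ~0: exact for noise-free LTI
```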
85. ...& that’s all of our shallow idea
[shown: first page of "Data-Enabled Predictive Control: In the Shallows of the DeePC" by Jeremy Coulson, John Lygeros, & Florian Dörfler]
DeePC = deep sea (making fun of ML & data science) ... though the idea is really shallow
surprise: the scheme has zero robustness → need deeper thinking
87. Regularizations to make it work
minimize over g, u, y, σ_y: Σ_{k=1}^{T_future} ‖y_k − r_k‖²_Q + ‖u_k‖²_R + λ_y ‖σ_y‖_p + λ_g h(g)
subject to H(u^d, y^d) · g = [u_ini; y_ini; u; y] + [0; σ_y; 0; 0]
u_k ∈ U, y_k ∈ Y ∀k ∈ {1, ..., T_future}
• measurement noise → infeasible y_ini estimate → estimation slack σ_y → moving-horizon least-squares filter
• noisy or nonlinear (offline) data matrix → any (u, y) feasible (in practice H is never low-rank) → add regularizer h(g)
Bayesian intuition: regularization ⇔ prior, e.g., h(g) = ‖g‖_1 sparsely selects {trajectory matrix columns} = {motion primitives} ~ low-order basis
Robustness intuition: regularization ⇔ robustification, e.g., in a simple case (board):
min_x max_{‖Δ‖≤ρ} ‖(A + Δ)x − b‖ = min_x ‖Ax − b‖ + ρ‖x‖
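A sketch of the full regularized DeePC program as one convex problem, written with cvxpy; it reuses Up, Yp, Uf, Yf, uini, yini from the prediction sketch above, and the reference r, the weights, and the input bound are hypothetical tuning choices.

```python
import numpy as np
import cvxpy as cp

# Regularized DeePC as a single convex program (Q = I, R = 0.1*I; lam_y, lam_g,
# and the input bound are hypothetical choices). Up, Yp, Uf, Yf, uini, yini
# are assumed to come from the prediction sketch above.
Tini, Tf = len(uini), Uf.shape[0]
r = np.zeros(Tf)                          # reference to track
lam_y, lam_g = 1e4, 10.0

g = cp.Variable(Up.shape[1])
u = cp.Variable(Tf); y = cp.Variable(Tf)
sigma_y = cp.Variable(Tini)               # slack on the initial-output match

cost = (cp.sum_squares(y - r) + 0.1 * cp.sum_squares(u)
        + lam_y * cp.norm1(sigma_y)       # moving-horizon least-squares filter
        + lam_g * cp.norm1(g))            # regularizer h(g) = ||g||_1
constraints = [Up @ g == uini,
               Yp @ g == yini + sigma_y,
               Uf @ g == u,
               Yf @ g == y,
               cp.abs(u) <= 1.0]          # hard input constraints U = [-1, 1]
cp.Problem(cp.Minimize(cost), constraints).solve()
print(u.value[0])  # receding horizon: apply the first input, then re-solve
```

In closed loop one applies the first input, shifts (u_ini, y_ini) by one sample, and re-solves at every step.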
93. Regularization = relaxing low-rank approximation in pre-processing
minimize over u, y, g: control cost(u, y) subject to [u; y] = H(û, ŷ) g   (optimal control)
where (û, ŷ) ∈ argmin ‖(û, ŷ) − (u^d, y^d)‖ subject to rank H(û, ŷ) = mL + n   (low-rank approximation)
↓ sequence of convex relaxations ↓
• replace the rank constraint by a sparsity constraint: rank H(û, ŷ) = mL + n ⇔ ‖g‖_0 ≤ mL + n (low-rank ⇔ (mL + n)-sparse over the horizon)
• drop the pre-processing & use the raw data: (û, ŷ) = (u^d, y^d)
• relax ‖g‖_0 to the convex ‖g‖_1 & move it into the objective:
minimize over u, y, g: control cost(u, y) + λ_g · ‖g‖_1 subject to [u; y] = H(u^d, y^d) g
⇒ ℓ_1-regularization = relaxation of low-rank approximation & smoothened order selection
[plot: realized closed-loop cost vs. λ_g]
102. Certainty-equivalence regularizer (board):
model-based control: y = K̂ [u_ini; y_ini; u], where K̂ is learnt via least squares: K̂ = argmin_K ‖Y_f − K [U_p; Y_p; U_f]‖
DeePC: [u_ini; y_ini; u; y] = H(u^d, y^d) g
→ to recover the model-based solution we need to penalize g appropriately
103. Regularization ⇔ reformulated subspace ID
→ indirect SysID + control problem: minimize over u, y: control cost(u, y) subject to y = K* [u_ini; y_ini; u]
partition the data as in subspace ID: H(u^d, y^d) ~ [U_p; Y_p; U_f; Y_f], with (m+p)T_ini rows on top & (m+p)T_future rows below
ID of the optimal multi-step (subspace) predictor as in SPC: K* = argmin_K ‖Y_f − K [U_p; Y_p; U_f]‖ = Y_f [U_p; Y_p; U_f]†
The above is equivalent to regularized DeePC:
minimize over g, u, y: control cost(u, y) + λ_g ‖Proj(u^d, y^d) g‖_p subject to H(u^d, y^d) · g = [u_ini; y_ini; u; y]
where Proj(u^d, y^d) projects orthogonally onto ker [U_p; Y_p; U_f]
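A sketch of this equivalence in numpy (again reusing Up, Yp, Uf, Yf, uini, yini from the earlier prediction sketch): the SPC predictor K* is one pseudoinverse, and the regularizer penalizes exactly the component of g in ker [Up; Yp; Uf].

```python
import numpy as np

# SPC predictor and the matching projection regularizer (Up, Yp, Uf, Yf, uini,
# yini assumed from the DeePC prediction sketch above).
M = np.vstack([Up, Yp, Uf])
K_star = Yf @ np.linalg.pinv(M)          # optimal multi-step predictor

u_f = np.ones(Uf.shape[0])
b = np.concatenate([uini, yini, u_f])
y_spc = K_star @ b

P_ker = np.eye(M.shape[1]) - np.linalg.pinv(M) @ M   # projector onto ker(M)
g = np.linalg.pinv(M) @ b                # the g favored by the regularizer
print(np.linalg.norm(P_ker @ g))         # ~0: no component in ker(M)
print(np.allclose(Yf @ g, y_spc))        # True: reproduces the SPC prediction
```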
109. Performance of regularizers applied to a stochastic LTI system [plots: realized cost vs. λ for h(g) = ‖g‖_p & h(g) = ‖Proj(u^d, y^d) g‖_p; the Hanke-Raus heuristic (often) reveals ...]
110. Further details worked out (for another day)
• DeePC with closed-loop data leveraging a transient predictor: single-step, causal, unbiased, with optimal regularizers
• subspace tracking for LTV systems
• extensions to nonlinear system classes: conic/convex/affine behaviors, LPV, RKHS formulations, or ...
• robust problem formulations to robustify objectives & constraints
• behavioral uncertainty quantification via subspace angles
[shown: poster "online system identification" by Andras Sasfi, Alberto Padoan, Ivan Markovsky, & Florian Dörfler: linear time-varying behaviors as time-varying subspaces, with LTV state-space equations x_{t+1} = A_t x_t + B_t u_t, y_t = C_t x_t + D_t u_t]
111. Case study: wind turbine
• detailed industrial model: 37 states & highly nonlinear (abc ↔ dq, MPPT, PLL, power specs, dynamics, etc.)
• turbine & grid model unknown to commissioning engineer & operator
• weak grid + PLL + fault → loss of sync
• disturbance to be rejected by DeePC
[plots: data collection, oscillation observed without additional control, DeePC activated; comparing h(g) = ‖g‖²_2, h(g) = ‖g‖_1, & h(g) = ‖Proj(u^d, y^d) g‖²_2]
regularizer tuning via the Hanke-Raus heuristic
114. Case study ++ : wind farm
[figure: IEEE nine-bus system with synchronous generators SG 1-3 & a wind farm of 10 turbines]
• high-fidelity models for turbines, machines, & the IEEE 9-bus system
• fast frequency response via decentralized DeePC at the turbines
[plots: h(g) = ‖Proj(u^d, y^d) g‖²_2 vs. subspace ID + control]
117. DeePC is easy to implement → try it! → simple script adapted from our ETH Zürich bachelor course on Computational Control: https://colab.research.google.com/drive/1URdRqr-Up0A6uDMjlU6gwmsoAAPl1GId?usp=sharing
  118. Towards a theory for nonlinear systems

    idea: lift the nonlinear system to a large-/∞-dimensional bi-/linear system
    → Carleman, Volterra, Fliess, Koopman, Sturm–Liouville methods
    → nonlinear dynamics can be approximated by LTI on a finite horizon 40/57
  119. Towards a theory for nonlinear systems

    idea: lift the nonlinear system to a large-/∞-dimensional bi-/linear system
    → Carleman, Volterra, Fliess, Koopman, Sturm–Liouville methods
    → nonlinear dynamics can be approximated by LTI on a finite horizon
    regularization singles out the relevant features / basis functions in the data 40/57
  120. Towards a theory for nonlinear systems

    idea: lift the nonlinear system to a large-/∞-dimensional bi-/linear system
    → Carleman, Volterra, Fliess, Koopman, Sturm–Liouville methods
    → nonlinear dynamics can be approximated by LTI on a finite horizon
    regularization singles out the relevant features / basis functions in the data
    https://www.research-collection.ethz.ch/handle/20.500.11850/493419 40/57
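As a concrete, heavily simplified illustration of this lifting idea: augment the measured output with nonlinear features before building the Hankel matrices, so the linear-combination step of DeePC operates on a richer basis. The feature map below is an assumed toy basis, not the method of any particular reference; a regularizer on $g$ then singles out the features that matter, as stated above.

```python
import numpy as np

def lift(y):
    # toy Koopman-style feature map applied samplewise (an assumed basis;
    # in practice the basis is problem-specific or learned)
    return np.column_stack([y, y**2, np.sin(y)])

# yd: (T, 1) measured outputs of some nonlinear system (placeholder data)
yd = np.random.default_rng(0).standard_normal((100, 1))
zd = lift(yd)                       # (T, 3) lifted "outputs"

# build the block-Hankel matrix from the lifted signal exactly as in the LTI case
L = 6
Hz = np.column_stack([zd[j:j+L].reshape(-1) for j in range(len(zd) - L + 1)])
print(Hz.shape)                     # (L * n_features, T - L + 1)
```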
  121. Distributional robustification beyond LTI

    • problem abstraction: $\min_{x\in\mathcal{X}} c(\hat\xi, x) = \min_{x\in\mathcal{X}} \mathbb{E}_{\xi\sim\hat{\mathbb{P}}}\big[c(\xi, x)\big]$, where $\hat\xi$ denotes the measured data with empirical sample distribution $\hat{\mathbb{P}} = \delta_{\hat\xi}$ 42/57
  122. Distributional robustification beyond LTI

    • problem abstraction: $\min_{x\in\mathcal{X}} c(\hat\xi, x) = \min_{x\in\mathcal{X}} \mathbb{E}_{\xi\sim\hat{\mathbb{P}}}\big[c(\xi, x)\big]$, where $\hat\xi$ denotes the measured data with empirical sample distribution $\hat{\mathbb{P}} = \delta_{\hat\xi}$
    ⟹ poor out-of-sample performance of the above sample-average solution $x^\star$ on the real problem $\mathbb{E}_{\xi\sim\mathbb{P}}\big[c(\xi, x^\star)\big]$, where $\mathbb{P}$ is the unknown distribution of $\xi$ 42/57
  123. Distributional robustification beyond LTI

    • problem abstraction: $\min_{x\in\mathcal{X}} c(\hat\xi, x) = \min_{x\in\mathcal{X}} \mathbb{E}_{\xi\sim\hat{\mathbb{P}}}\big[c(\xi, x)\big]$, where $\hat\xi$ denotes the measured data with empirical sample distribution $\hat{\mathbb{P}} = \delta_{\hat\xi}$
    ⟹ poor out-of-sample performance of the above sample-average solution $x^\star$ on the real problem $\mathbb{E}_{\xi\sim\mathbb{P}}\big[c(\xi, x^\star)\big]$, where $\mathbb{P}$ is the unknown distribution of $\xi$
    • distributionally robust formulation accounting for all (possibly nonlinear) stochastic processes that could have generated the data 42/57
  124. Distributional robustification beyond LTI

    • problem abstraction: $\min_{x\in\mathcal{X}} c(\hat\xi, x) = \min_{x\in\mathcal{X}} \mathbb{E}_{\xi\sim\hat{\mathbb{P}}}\big[c(\xi, x)\big]$, where $\hat\xi$ denotes the measured data with empirical sample distribution $\hat{\mathbb{P}} = \delta_{\hat\xi}$
    ⟹ poor out-of-sample performance of the above sample-average solution $x^\star$ on the real problem $\mathbb{E}_{\xi\sim\mathbb{P}}\big[c(\xi, x^\star)\big]$, where $\mathbb{P}$ is the unknown distribution of $\xi$
    • distributionally robust formulation accounting for all (possibly nonlinear) stochastic processes that could have generated the data:
    $\inf_{x\in\mathcal{X}} \; \sup_{\mathbb{Q}\in\mathcal{B}_\epsilon(\hat{\mathbb{P}})} \; \mathbb{E}_{\xi\sim\mathbb{Q}}\big[c(\xi, x)\big]$
    (the inner sup maximizes over all distributions $\mathbb{Q}$ that are $\epsilon$-close to the samples; the outer inf is the design problem) 42/57
  125. Distributional robustification beyond LTI

    • problem abstraction: $\min_{x\in\mathcal{X}} c(\hat\xi, x) = \min_{x\in\mathcal{X}} \mathbb{E}_{\xi\sim\hat{\mathbb{P}}}\big[c(\xi, x)\big]$, where $\hat\xi$ denotes the measured data with empirical sample distribution $\hat{\mathbb{P}} = \delta_{\hat\xi}$
    ⟹ poor out-of-sample performance of the above sample-average solution $x^\star$ on the real problem $\mathbb{E}_{\xi\sim\mathbb{P}}\big[c(\xi, x^\star)\big]$, where $\mathbb{P}$ is the unknown distribution of $\xi$
    • distributionally robust formulation accounting for all (possibly nonlinear) stochastic processes that could have generated the data:
    $\inf_{x\in\mathcal{X}} \; \sup_{\mathbb{Q}\in\mathcal{B}_\epsilon(\hat{\mathbb{P}})} \; \mathbb{E}_{\xi\sim\mathbb{Q}}\big[c(\xi, x)\big]$
    where $\mathcal{B}_\epsilon(\hat{\mathbb{P}})$ is an $\epsilon$-Wasserstein ball centered at the empirical sample distribution $\hat{\mathbb{P}}$:
    $\mathcal{B}_\epsilon(\hat{\mathbb{P}}) = \Big\{ \mathbb{P} : \inf_\Pi \int \|\xi - \hat\xi\|_p \, d\Pi \le \epsilon \Big\}$
    [figure: transport plan $\Pi$ between $\hat{\mathbb{P}}$ and $\mathbb{P}$] 42/57
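To make the ambiguity ball concrete: for scalar data the 1-Wasserstein distance is available directly in SciPy, so one can check whether a candidate distribution's samples lie within an $\epsilon$-ball of the empirical one (the two distributions and the radius below are assumptions for illustration):

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
xi_hat = rng.normal(0.0, 1.0, size=50)      # measured samples -> empirical P-hat
xi_q   = rng.normal(0.3, 1.2, size=5000)    # samples from a candidate Q

eps = 0.5                                    # ambiguity radius (assumption)
d = wasserstein_distance(xi_hat, xi_q)       # 1-Wasserstein distance in 1-D
print(f"W1 = {d:.3f}; Q inside B_eps(P-hat): {d <= eps}")
```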
  126. • distributional robustness ≡ regularization: under minor conditions

    Theorem: $\inf_{x\in\mathcal{X}} \; \sup_{\mathbb{Q}\in\mathcal{B}_\epsilon(\hat{\mathbb{P}})} \; \mathbb{E}_{\xi\sim\mathbb{Q}}\big[c(\xi, x)\big]$ (distributionally robust formulation) 43/57
  127. • distributional robustness ≡ regularization: under minor conditions

    Theorem: $\inf_{x\in\mathcal{X}} \; \sup_{\mathbb{Q}\in\mathcal{B}_\epsilon(\hat{\mathbb{P}})} \; \mathbb{E}_{\xi\sim\mathbb{Q}}\big[c(\xi, x)\big]$ (distributionally robust formulation)
    $\equiv \; \min_{x\in\mathcal{X}} \; c(\hat\xi, x) + \epsilon \, \mathrm{Lip}(c) \cdot \|x\|_p^\ast$ (the previous regularized DeePC formulation; $\|\cdot\|_p^\ast$ is the dual norm, with pairs $1 \leftrightarrow \infty$ and $2 \leftrightarrow 2$) 43/57
  128. • distributional robustness ≡ regularization: under minor conditions

    Theorem: $\inf_{x\in\mathcal{X}} \; \sup_{\mathbb{Q}\in\mathcal{B}_\epsilon(\hat{\mathbb{P}})} \; \mathbb{E}_{\xi\sim\mathbb{Q}}\big[c(\xi, x)\big]$ (distributionally robust formulation)
    $\equiv \; \min_{x\in\mathcal{X}} \; c(\hat\xi, x) + \epsilon \, \mathrm{Lip}(c) \cdot \|x\|_p^\ast$ (the previous regularized DeePC formulation)
    Cor: $\ell_\infty$-robustness in trajectory space ⟺ $\ell_1$-regularization of DeePC 43/57
  129. • distributional robustness ≡ regularization: under minor conditions

    Theorem: $\inf_{x\in\mathcal{X}} \; \sup_{\mathbb{Q}\in\mathcal{B}_\epsilon(\hat{\mathbb{P}})} \; \mathbb{E}_{\xi\sim\mathbb{Q}}\big[c(\xi, x)\big]$ (distributionally robust formulation)
    $\equiv \; \min_{x\in\mathcal{X}} \; c(\hat\xi, x) + \epsilon \, \mathrm{Lip}(c) \cdot \|x\|_p^\ast$ (the previous regularized DeePC formulation)
    Cor: $\ell_\infty$-robustness in trajectory space ⟺ $\ell_1$-regularization of DeePC
    [figure: realized closed-loop cost as a function of the Wasserstein radius $\epsilon \in [10^{-5}, 10^{0}]$] 43/57
  130. • distributional robustness ≡ regularization: under minor conditions

    Theorem: $\inf_{x\in\mathcal{X}} \; \sup_{\mathbb{Q}\in\mathcal{B}_\epsilon(\hat{\mathbb{P}})} \; \mathbb{E}_{\xi\sim\mathbb{Q}}\big[c(\xi, x)\big]$ (distributionally robust formulation)
    $\equiv \; \min_{x\in\mathcal{X}} \; c(\hat\xi, x) + \epsilon \, \mathrm{Lip}(c) \cdot \|x\|_p^\ast$ (the previous regularized DeePC formulation)
    Cor: $\ell_\infty$-robustness in trajectory space ⟺ $\ell_1$-regularization of DeePC
    • measure concentration: averaging the matrix $\frac{1}{N}\sum_{i=1}^{N}\mathscr{H}_i(y^d)$ from i.i.d. experiments ⟹ the ambiguity set $\mathcal{B}_\epsilon(\hat{\mathbb{P}})$ includes the true $\mathbb{P}$ with high confidence if $\epsilon \sim 1/N^{1/\dim(\xi)}$ [figure: $N = 1$ vs. $N = 10$] 43/57
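A quick sketch of the averaging step (the toy first-order system and noise level are assumptions): repeat the experiment $N$ times with the same input, average the output Hankel matrices, and the noise-induced error shrinks, consistent with the concentration statement above.

```python
import numpy as np

rng = np.random.default_rng(2)
T, L, N = 60, 8, 10
ud = rng.uniform(-1, 1, T)                   # one fixed excitation input

def rollout(u, noise):
    # first-order toy system y+ = 0.8 y + u, with measurement noise (assumption)
    y, ys = 0.0, []
    for ut in u:
        ys.append(y + noise * rng.standard_normal())
        y = 0.8 * y + ut
    return np.array(ys)

def hankel(w, L):
    return np.column_stack([w[j:j+L] for j in range(len(w) - L + 1)])

H_true = hankel(rollout(ud, 0.0), L)                             # noiseless reference
H_one  = hankel(rollout(ud, 0.1), L)                             # single experiment
H_avg  = sum(hankel(rollout(ud, 0.1), L) for _ in range(N)) / N  # averaged over N
print("error, single experiment :", np.linalg.norm(H_one - H_true))
print("error, averaged over N=10:", np.linalg.norm(H_avg - H_true))
```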
  131. Further ingredients

    • more structured uncertainty sets: tractable reformulations (relaxations) & performance guarantees
    • distributionally robust probabilistic constraints: $\sup_{\mathbb{Q}\in\mathcal{B}_\epsilon(\hat{\mathbb{P}})} \mathrm{CVaR}^{\mathbb{Q}}_{1-\alpha}$ ⟺ averaging + regularization + tightening
    [figure: distribution of $X$ with $\mathrm{VaR}^{\mathbb{P}}_{1-\alpha}(X)$ and $\mathrm{CVaR}^{\mathbb{P}}_{1-\alpha}(X)$ marked; $\mathbb{P}\big(X \le \mathrm{VaR}^{\mathbb{P}}_{1-\alpha}(X)\big) \ge 1-\alpha$] 44/57
  132. Further ingredients

    • more structured uncertainty sets: tractable reformulations (relaxations) & performance guarantees
    • distributionally robust probabilistic constraints: $\sup_{\mathbb{Q}\in\mathcal{B}_\epsilon(\hat{\mathbb{P}})} \mathrm{CVaR}^{\mathbb{Q}}_{1-\alpha}$ ⟺ averaging + regularization + tightening
    • replace (finite) moving-horizon estimation via $(u_{\mathrm{ini}}, y_{\mathrm{ini}})$ by recursive Kalman filtering based on the optimization solution $g^\star$ as hidden state ... 44/57
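For reference, CVaR has a simple sample-based form, the average of the worst $\alpha$-fraction of outcomes, which is what the averaging + regularization + tightening reformulation operates on; a minimal sketch with assumed loss samples:

```python
import numpy as np

def cvar(samples, alpha):
    # empirical CVaR_{1-alpha}: mean of the worst alpha-fraction of samples
    x = np.sort(samples)
    k = max(1, int(np.ceil(alpha * len(x))))
    return x[-k:].mean()

losses = np.random.default_rng(3).standard_normal(10_000)
print("VaR_0.95  ~", np.quantile(losses, 0.95))
print("CVaR_0.95 ~", cvar(losses, 0.05))
```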
  133. elephant in the room: how does DeePC perform against SysID + control?

    surprise: DeePC consistently beats (certainty-equivalence) identification & control of LTI models across all real case studies!
  134. elephant in the room: how does DeePC perform against SysID + control?

    surprise: DeePC consistently beats (certainty-equivalence) identification & control of LTI models across all real case studies!
    why ?!?
  135. Comparison: direct vs. indirect control

    indirect ID-based data-driven control:
    minimize control cost$(u, y)$
    subject to $(u, y)$ satisfy parametric model
    where model $\in \arg\min$ id cost$(u^d, y^d)$ subject to model $\in \mathrm{LTI}(n, \ell)$ class 45/57
  136. Comparison: direct vs. indirect control

    indirect ID-based data-driven control:
    minimize control cost$(u, y)$
    subject to $(u, y)$ satisfy parametric model
    where model $\in \arg\min$ id cost$(u^d, y^d)$ subject to model $\in \mathrm{LTI}(n, \ell)$ class (the inner problem ⟹ ID) 45/57
  137. Comparison: direct vs. indirect control

    indirect ID-based data-driven control:
    minimize control cost$(u, y)$
    subject to $(u, y)$ satisfy parametric model
    where model $\in \arg\min$ id cost$(u^d, y^d)$ subject to model $\in \mathrm{LTI}(n, \ell)$ class (the inner problem ⟹ ID)
    ID projects the data onto the set of LTI models
    • with parameters $(n, \ell)$
    • removes noise & thus lowers the variance error
    • suffers a bias error if the plant is not $\mathrm{LTI}(n, \ell)$ 45/57
  138. Comparison: direct vs. indirect control

    indirect ID-based data-driven control:
    minimize control cost$(u, y)$
    subject to $(u, y)$ satisfy parametric model
    where model $\in \arg\min$ id cost$(u^d, y^d)$ subject to model $\in \mathrm{LTI}(n, \ell)$ class (the inner problem ⟹ ID)
    ID projects the data onto the set of LTI models
    • with parameters $(n, \ell)$
    • removes noise & thus lowers the variance error
    • suffers a bias error if the plant is not $\mathrm{LTI}(n, \ell)$
    direct regularized data-driven control:
    minimize control cost$(u, y)$ + $\lambda$ · regularizer
    subject to $(u, y)$ consistent with $(u^d, y^d)$ data 45/57
  139. Comparison: direct vs. indirect control

    indirect ID-based data-driven control:
    minimize control cost$(u, y)$
    subject to $(u, y)$ satisfy parametric model
    where model $\in \arg\min$ id cost$(u^d, y^d)$ subject to model $\in \mathrm{LTI}(n, \ell)$ class (the inner problem ⟹ ID)
    ID projects the data onto the set of LTI models
    • with parameters $(n, \ell)$
    • removes noise & thus lowers the variance error
    • suffers a bias error if the plant is not $\mathrm{LTI}(n, \ell)$
    direct regularized data-driven control:
    minimize control cost$(u, y)$ + $\lambda$ · regularizer
    subject to $(u, y)$ consistent with $(u^d, y^d)$ data
    • regularization robustifies → choosing $\lambda$ makes it work
    • no projection onto $\mathrm{LTI}(n, \ell)$ → no de-noising & no bias 45/57
  140. Comparison: direct vs. indirect control

    indirect ID-based data-driven control:
    minimize control cost$(u, y)$
    subject to $(u, y)$ satisfy parametric model
    where model $\in \arg\min$ id cost$(u^d, y^d)$ subject to model $\in \mathrm{LTI}(n, \ell)$ class (the inner problem ⟹ ID)
    ID projects the data onto the set of LTI models
    • with parameters $(n, \ell)$
    • removes noise & thus lowers the variance error
    • suffers a bias error if the plant is not $\mathrm{LTI}(n, \ell)$
    direct regularized data-driven control:
    minimize control cost$(u, y)$ + $\lambda$ · regularizer
    subject to $(u, y)$ consistent with $(u^d, y^d)$ data
    • regularization robustifies → choosing $\lambda$ makes it work
    • no projection onto $\mathrm{LTI}(n, \ell)$ → no de-noising & no bias
    hypothesis: ID wins in the stochastic (variance) case & DeePC in the nonlinear (bias) case 45/57
  141. Case study: direct vs. indirect control

    stochastic LTI case
    • LQR control of a 5th-order LTI system of known order
    • Gaussian noise with varying noise-to-signal ratio (100 rollouts per case)
    • $\ell_1$-regularized DeePC, SysID via N4SID, & judicious hyper-parameters 46/57
  142. Case study: direct vs. indirect control

    stochastic LTI case
    • LQR control of a 5th-order LTI system of known order
    • Gaussian noise with varying noise-to-signal ratio (100 rollouts per case)
    • $\ell_1$-regularized DeePC, SysID via N4SID, & judicious hyper-parameters
    [figure: closed-loop cost vs. noise-to-signal ratio, from deterministic to noisy] 46/57
  143. Case study: direct vs. indirect control

    stochastic LTI case → indirect ID wins
    • LQR control of a 5th-order LTI system of known order
    • Gaussian noise with varying noise-to-signal ratio (100 rollouts per case)
    • $\ell_1$-regularized DeePC, SysID via N4SID, & judicious hyper-parameters
    [figure: closed-loop cost vs. noise-to-signal ratio, from deterministic to noisy] 46/57
  144. Case study: direct vs. indirect control

    stochastic LTI case → indirect ID wins
    • LQR control of a 5th-order LTI system of known order
    • Gaussian noise with varying noise-to-signal ratio (100 rollouts per case)
    • $\ell_1$-regularized DeePC, SysID via N4SID, & judicious hyper-parameters
    nonlinear case
    • Lotka–Volterra + control: $x^+ = f(x, u)$
    • interpolated system $x^+ = \epsilon \cdot f_{\mathrm{linearized}}(x, u) + (1-\epsilon) \cdot f(x, u)$
    • same ID & DeePC as on the left & 100 initial-$x_0$ rollouts for each $\epsilon$ 46/57
  145. Case study: direct vs. indirect control

    stochastic LTI case → indirect ID wins
    • LQR control of a 5th-order LTI system of known order
    • Gaussian noise with varying noise-to-signal ratio (100 rollouts each case)
    • $\ell_1$-regularized DeePC, SysID via N4SID, & judicious hyper-parameters
    nonlinear case
    • Lotka–Volterra + control: $x^+ = f(x, u)$
    • interpolated system $x^+ = \epsilon \cdot f_{\mathrm{linearized}}(x, u) + (1-\epsilon) \cdot f(x, u)$
    • same ID & DeePC as on the left & 100 initial-$x_0$ rollouts for each $\epsilon$
    [figure: closed-loop cost vs. $\epsilon$, from nonlinear ($\epsilon = 0$) to linear ($\epsilon = 1$)] 46/57
  146. Case study: direct vs. indirect control

    stochastic LTI case → indirect ID wins
    • LQR control of a 5th-order LTI system of known order
    • Gaussian noise with varying noise-to-signal ratio (100 rollouts each case)
    • $\ell_1$-regularized DeePC, SysID via N4SID, & judicious hyper-parameters
    nonlinear case → direct DeePC wins
    • Lotka–Volterra + control: $x^+ = f(x, u)$
    • interpolated system $x^+ = \epsilon \cdot f_{\mathrm{linearized}}(x, u) + (1-\epsilon) \cdot f(x, u)$
    • same ID & DeePC as on the left & 100 initial-$x_0$ rollouts for each $\epsilon$
    [figure: closed-loop cost vs. $\epsilon$, from nonlinear ($\epsilon = 0$) to linear ($\epsilon = 1$)] 46/57
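A minimal sketch of the interpolated benchmark family (the concrete Lotka–Volterra parameters, input channel, linearization point, and discretization are assumptions; the study's exact setup may differ):

```python
import numpy as np

# controlled Lotka-Volterra dynamics (parameter values are assumptions)
a, b, c, d = 1.0, 0.4, 0.1, 0.4

def f(x, u):
    x1, x2 = x
    return np.array([a*x1 - b*x1*x2 + u,     # prey, with additive control
                     c*x1*x2 - d*x2])        # predator

# linearization around the open-loop equilibrium (d/c, a/b), where f vanishes
xe = np.array([d/c, a/b])
A = np.array([[a - b*xe[1], -b*xe[0]],
              [c*xe[1],      c*xe[0] - d]])
B = np.array([1.0, 0.0])

def f_lin(x, u):
    return A @ (x - xe) + B * u              # f(xe, 0) = 0 at the equilibrium

def f_eps(x, u, eps):
    # interpolated family: eps = 1 is linear, eps = 0 is fully nonlinear
    return eps * f_lin(x, u) + (1 - eps) * f(x, u)

# discrete-time rollout via forward Euler (step size an assumption)
dt, x = 0.05, np.array([1.5, 1.0])
for _ in range(100):
    x = x + dt * f_eps(x, 0.0, eps=0.5)
print("state after rollout:", x)
```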
  147. Power system case study revisited

    [figure: four-area power system (areas 1–4, loads 1–4) with two VSC-HVDC stations; control diagrams show the phase-locked loop and the current, power, DC-voltage & AC-voltage control loops; system partitioning; inset: uncontrolled inter-area flow (p.u.) over 10 s]
    • complex 4-area power system: large ($n = 208$), few measurements (8), nonlinear, noisy, stiff, input constraints, & decentralized control
    • control objective: damping of inter-area oscillations via the HVDC link
    • real-time MPC & DeePC prohibitive → choose $T$, $T_{\mathrm{ini}}$, & $T_{\mathrm{future}}$ wisely 47/57
  148. Centralized control

    [figure: closed-loop responses over 30 s – with PEM-MPC (s = 60) vs. with DeePC]
    PEM = Prediction Error Method system ID + MPC
    t < 10 s: open-loop data collection with white-noise excitation
    t > 10 s: control 48/57
  149. Performance: DeePC wins (clearly!)

    [figure: histogram of closed-loop cost over the number of simulations, DeePC vs. PEM-MPC]
    measured closed-loop cost $= \sum_k \|y_k - r_k\|_Q^2 + \|u_k\|_R^2$ 49/57
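The measured cost is straightforward to evaluate from logged closed-loop data; a minimal sketch (the logs, reference, and weights $Q$, $R$ below are assumed placeholders):

```python
import numpy as np

def closed_loop_cost(Y, U, R_ref, Q, R):
    # sum_k ||y_k - r_k||_Q^2 + ||u_k||_R^2 over logged trajectories
    cost = 0.0
    for y, u, r in zip(Y, U, R_ref):
        e = y - r
        cost += e @ Q @ e + u @ R @ u
    return cost

# assumed logs: 300 steps, 8 outputs, 4 inputs
rng = np.random.default_rng(4)
Y, U = rng.standard_normal((300, 8)), rng.standard_normal((300, 4))
R_ref = np.zeros((300, 8))
print(closed_loop_cost(Y, U, R_ref, Q=np.eye(8), R=0.1 * np.eye(4)))
```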
  150. DeePC hyper-parameter tuning

    regularizer $\lambda_g$
    • for distributional robustness ≈ radius of the Wasserstein ball
    • wide range of sweet spots → choose $\lambda_g = 20$
    estimation horizon $T_{\mathrm{ini}}$
    • for model complexity ≈ lag
    • $T_{\mathrm{ini}} \ge 50$ is sufficient & keeps the computational complexity low → choose $T_{\mathrm{ini}} = 60$ 50/57
  151. DeePC hyper-parameter tuning (continued)

    prediction horizon $T_{\mathrm{future}}$
    • nominal MPC is stable if the horizon $T_{\mathrm{future}}$ is long enough → choose $T_{\mathrm{future}} = 120$ & apply the first 60 input steps
    data length $T$
    • long enough for the low-rank condition, but $\mathrm{card}(g)$ grows → choose $T = 1500$ (data matrix ≈ square & well conditioned, as far as computation allows) 51/57
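The sizing rules above reduce to simple arithmetic on the trajectory-matrix dimensions; a tiny helper (the input/output counts in the demo call are assumptions):

```python
def hankel_shape(T, Tini, Tfuture, m, p):
    # trajectory matrix built from T samples of m inputs & p outputs:
    # (m + p) * (Tini + Tfuture) rows, T - (Tini + Tfuture) + 1 columns
    L = Tini + Tfuture
    return (m + p) * L, T - L + 1

rows, cols = hankel_shape(T=1500, Tini=60, Tfuture=120, m=4, p=8)
print(rows, cols, "aspect ratio:", cols / rows)
```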
  152. Computational cost

    [figure: closed-loop responses over 30 s]
    • $T = 1500$ • $\lambda_g = 20$ • $T_{\mathrm{ini}} = 60$ • $T_{\mathrm{future}} = 120$ & apply the first 60 input steps
    • sampling time = 0.02 s
    • solver (OSQP) time = 1 s (on an Intel Core i5-7200U)
    ⟹ implementable 52/57
  153. Comparison: Hankel & Page matrix

    [figure: averaged closed-loop cost vs. control horizon k – Hankel matrix, Hankel matrix with SVD (threshold = 1), Page matrix, Page matrix with SVD (threshold = 1)]
    • comparison baseline: Hankel and Page matrices of the same size
    • performance: Page consistently beats Hankel-matrix predictors
    • offline denoising via SVD thresholding works wonderfully for Page, though obviously not for Hankel (entries are constrained)
    • effects very pronounced for longer horizons (= open-loop time)
    • price-to-be-paid: the Page-matrix predictor requires more data 53/57
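A minimal sketch contrasting the two constructions and the SVD-thresholding step (signal, noise level, and threshold are assumptions). Page-matrix entries are independent samples, so low-rank truncation is a legitimate denoiser there, while Hankel entries are constrained by the overlapping structure, as noted above.

```python
import numpy as np

def hankel(w, L):
    # overlapping windows: each sample appears in up to L columns
    return np.column_stack([w[j:j+L] for j in range(len(w) - L + 1)])

def page(w, L):
    # non-overlapping windows: each sample appears exactly once
    K = len(w) // L
    return w[:K*L].reshape(K, L).T

def svd_denoise(M, thresh):
    # zero out singular values below thresh (hard threshold, an assumption)
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s = np.where(s >= thresh, s, 0.0)
    return (U * s) @ Vt

rng = np.random.default_rng(5)
t = np.arange(400)
w_clean = np.sin(0.2 * t)                    # a low-rank signal in trajectory space
w = w_clean + 0.1 * rng.standard_normal(400)

L = 20
print("Hankel:", hankel(w, L).shape, " Page:", page(w, L).shape)
P_hat = svd_denoise(page(w, L), thresh=1.0)
print("Page denoising error:", np.linalg.norm(P_hat - page(w_clean, L)))
```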
  154. Decentralized implementation

    [figure: four-area power system with two VSC-HVDC stations and their control diagrams (phase-locked loop; current, power, DC-voltage & AC-voltage control loops); system partitioning with loads 1–4; inset: uncontrolled flow (p.u.) over 20 s]
    • plug'n'play MPC: treat the interconnection $P_3$ as a disturbance variable $w$, with the past disturbance $w_{\mathrm{ini}}$ measurable & the future $w_{\mathrm{future}} \in \mathcal{W}$ uncertain
    • for each controller, augment the trajectory matrix with disturbance data $w$
    • decentralized robust min–max DeePC: $\min_{g,u,y}\,\max_{w\in\mathcal{W}}$ 54/57
  155. Decentralized control performance

    [figure: closed-loop responses over 30 s]
    • colors correspond to different hyper-parameter settings (not discernible)
    • the ambiguity set $\mathcal{W}$ is an $\infty$-ball (box)
    • for computational efficiency, $\mathcal{W}$ is downsampled (piece-wise linear)
    • solver time ≈ 2.6 s ⟹ implementable 55/57
  156. Conclusions

    main take-aways
    • matrix time series as predictive model
    • robustness & side-info by regularization
    • method that works in theory & practice
    • focus is robust prediction, not predictor ID – a philosophical shift
    [figure: wind farm / IEEE nine-bus system example] 56/57
  157. Conclusions

    main take-aways
    • matrix time series as predictive model
    • robustness & side-info by regularization
    • method that works in theory & practice
    • focus is robust prediction, not predictor ID – a philosophical shift
    ongoing work
    → certificates for adaptive & nonlinear cases
    → applications with a true "business case" (e.g., nuclear fusion), pushing the TRL scale, & industry collaborations
    [figure: wind farm / IEEE nine-bus system example] 56/57
  158. Conclusions

    main take-aways
    • matrix time series as predictive model
    • robustness & side-info by regularization
    • method that works in theory & practice
    • focus is robust prediction, not predictor ID – a philosophical shift
    ongoing work
    → certificates for adaptive & nonlinear cases
    → applications with a true "business case" (e.g., nuclear fusion), pushing the TRL scale, & industry collaborations
    questions we should discuss
    • the catch? violating the no-free-lunch theorem? → more real-time computation (sparse MPC constraints vs. dense DeePC constraints) 56/57
  159. Conclusions

    main take-aways
    • matrix time series as predictive model
    • robustness & side-info by regularization
    • method that works in theory & practice
    • focus is robust prediction, not predictor ID – a philosophical shift
    ongoing work
    → certificates for adaptive & nonlinear cases
    → applications with a true "business case" (e.g., nuclear fusion), pushing the TRL scale, & industry collaborations
    questions we should discuss
    • the catch? violating the no-free-lunch theorem? → more real-time computation (sparse MPC constraints vs. dense DeePC constraints)
    • when does direct beat indirect? → Id4Control & bias/variance issues? 56/57