diff --git a/docs/src/examples/invpend.md b/docs/src/examples/invpend.md
deleted file mode 100644
index 9cb8d5fe..00000000
--- a/docs/src/examples/invpend.md
+++ /dev/null
@@ -1,33 +0,0 @@
-# [Example - Inverted Pendulum on a Cart](@id ex_invpned)
-To exemplify the basics of **LinearMPC.jl**, we consider the classical example of controlling a inverted pendulum on a cart. The cart can be moved by applying a force $u$, to control the cart position $p$
-and pendulum angle $\theta$. The start of using MPC is to have a _model_ of the system we want to control, and the model which is used in an MPC controller is discrete-time state-space models of the form $x_{k+1} = F x_k G u_k$. In ref we discuss more how to use different types of models, but the end-point is always a discrete-time state-space system. For simplicity, we assume that the following state-space model is given for cart system
-
-```m̀ath
-F = ,\quad G =
-```
-where $p$ and $\dot{p}$ are the position/velocity of the cart, and $\theta$ and $\dot{theta}$ are the angle and angular velocity of the pendulum.
-
-Our goal is to control the cart's position $p$ and the angle of the pendulum $\theta$. We denote the signals we want to control with $y$, which in our case gives $y_1 = p$ and $y_2 = \theta$. How the states relates to our output $y$ can compactly be written as $y = Cx$, which for the cart system gives
-```math
-C =
-```
-An additional requirement is that we don't want the control signal to be too "jittery". This is addressed with a penalty $\Delta u R_r \Delta u$, where $\Delta u$ denotes the change in the control input between to time steps.
-The objective at a single time step, thus becomes $(Cx - y)^T Q (Cx-y) + \Delta u^T R \Delta u$.
-
-
-In addition to the dynamics of the system, we also want to impose additional constraint when controlling the system. First, there is a limited amount of force $u$ that can be applied. Specifically, this gives us the following bounds on the control: $-2 \leq u \leq 2$. Moreover, we also want the angle $\theta$ to not be to large, since this would our model. Therefore, we also have the constraint $-0.2 \leq \theta \leq 0.2$.
-
-
-
-
-```julia
-F =
-```
-
-Taken together, we pose the control problem as a receding horizon control problem.
-
-Control horizon / Prediction horizon (hint at move blocking)
-
-Simulate the system
-
-## Generating C-code
diff --git a/docs/src/manual/game.md b/docs/src/manual/game.md
index d089e885..2503ec89 100644
--- a/docs/src/manual/game.md
+++ b/docs/src/manual/game.md
@@ -1,4 +1,4 @@
-# [Robust MPC](@id man_robust)
+# [Game-Theoretic MPC](@id man_robust)
 
 **LinearMPC.jl** can also solve general Nash equilibria to game-theoretic linear MPC problems. Specifically it can handle objective of the form
diff --git a/docs/src/manual/moveblock.md b/docs/src/manual/moveblock.md
index 23a3ee78..fb40d379 100644
--- a/docs/src/manual/moveblock.md
+++ b/docs/src/manual/moveblock.md
@@ -28,20 +28,16 @@ Finally, it is possible to define different move blocks for different control signals
 Consider the example of the control of an inverted pendulum on a cart, which is predefined as one of the examples in **LinearMPC.jl** and can be accessed with the call `mpc_examples`. We consider the case when we want the first output (the position of the cart) to reach a certain value (1 m to be specific.) Below we create three different MPC controllers with different prediction horizons and show the resulting step responses.
-```julia
+```@example move_block
 using LinearMPC,Plots
 tsolve, plt = zeros(3),plot();
 for (k,Np) in enumerate([100,75,50])
     mpc,_ = LinearMPC.mpc_examples("invpend",Np)
-    tsolve[k] = @elapsed sim = LinearMPC.Simulation(mpc;r=[1,0],N=500);
+    dynamics = (x,u,d) -> mpc.model.F*x + mpc.model.G*u # use the linear model as the simulation dynamics
+    tsolve[k] = @elapsed sim = LinearMPC.Simulation(dynamics,mpc;r=[1,0],N=500);
     plot!(plt, sim,yids=[1],uids=[], color = k, label="Np = "*string(Np))
 end
-plot!(plt,ylims=(-0.5,1.25))
-```
-
-```@raw html
-<img src="..." alt="simple_sim1">
+plot!(plt,ylims=(-0.5,1.25), legend=true)
 ```
 As can be seen, the higher the prediction horizon, the faster the setpoint is reached. (Note that for even lower values of the prediction horizon, the resulting closed-loop system becomes unstable.) The cost of the horizon can, however, be seen in the solve times, where the solve times are about four times slower for $N_p = 100$ compared with $N_p = 50$.
@@ -49,23 +45,26 @@ As can be seen, the higher the prediction horizon, the faster the setpoint is re
 To reduce the computation time, we consider three different move blocks. We consider the case of no move blocks (which an empty vector of move blocks encodes,) of evenly spaced move blocks `[1,1,1,1,1]` and a more dynamics set of move blocks `[1,1,5,10,10]`. For all of the cases, we consider a prediction horizon of $N_p = 100$. The resulting step responses are shown below.
-```julia
-using LinearMPC,Plots
+```@example move_block
 mpc,_ = LinearMPC.mpc_examples("invpend",100)
+dynamics = (x,u,d) -> mpc.model.F*x + mpc.model.G*u # use the linear model as the simulation dynamics
 tsolve, plt = zeros(3),plot();
 move_blocks = [Int[], [1,1,1,1,1], [1,1,5,10,10]]
+solve_times = []
 for (k,mb) in enumerate(move_blocks)
     move_block!(mpc,mb)
-    tsolve[k] = @elapsed sim = LinearMPC.Simulation(mpc;r=[1,0],N=500);
+    tsolve[k] = @elapsed sim = LinearMPC.Simulation(dynamics,mpc;r=[1,0],N=500);
     plot!(plt, sim,yids=[1],uids=[], color = k, label="move block = "*string(mb))
+    push!(solve_times,sim.solve_times)
 end
-plot!(plt,ylims=(-0.5,1.25))
+plot!(plt,ylims=(-0.5,1.25),legend=true)
 ```
-```@raw html
-<img src="..." alt="simple_sim1">
+We see that the adaptive move block performs almost as well as using no move blocks. The main difference is that when no move blocks are used, there are 100 decision variables, while for both the move blocks `[1,1,1,1,1]` and `[1,1,5,10,10]` there are only 5 decision variables.
+This can be seen in the solution times, where both move blocks lead to solve times that are about 5-10 times faster.
+```@example move_block
+using Statistics
+for (k,mb) in enumerate(move_blocks)
+    println("median solve time: $(round(median(solve_times[k]),sigdigits=3)) | move block: "*string(mb))
+end
 ```
-
-As can be seen, the adaptive move block almost performs as well as using no move blocks. The main difference is that when no move blocks are used, there are 100 decision variables, while for both the move blocks `[1,1,1,1,1]` and `[1,1,5,10,10]` there are only 5 decision variables.
-This can be seen in the solution times, where both of the move blocks leads to solution times that are about 6 times faster (note that the actual speedup is even higher due to some overhead from the simulation.)
diff --git a/docs/src/manual/prestab.md b/docs/src/manual/prestab.md
index ff081133..42c8e09c 100644
--- a/docs/src/manual/prestab.md
+++ b/docs/src/manual/prestab.md
@@ -21,6 +21,10 @@ A popular choice of $K$ is as the gain from solving an infinite horizon LQR prob
 ```julia
 set_prestabilizing_feedback!(mpc)
 ```
+
+!!! note "Prestabilization + Move block"
+    If you use prestabilization and move blocking, only $v_k$ is held constant, not $u_k$. Hence, the closed-loop behaviour with/without prestabilization might differ.
+
 ## Example
 Consider a first-order system with a pole in 10, which has the transfer function
 ```math