diff --git a/docs/src/examples/invpend.md b/docs/src/examples/invpend.md
deleted file mode 100644
index 9cb8d5fe..00000000
--- a/docs/src/examples/invpend.md
+++ /dev/null
@@ -1,33 +0,0 @@
-# [Example - Inverted Pendulum on a Cart](@id ex_invpned)
-To exemplify the basics of **LinearMPC.jl**, we consider the classical example of controlling an inverted pendulum on a cart. The cart can be moved by applying a force $u$, to control the cart position $p$
-and the pendulum angle $\theta$. The starting point for using MPC is a _model_ of the system we want to control; the model used in an MPC controller is a discrete-time state-space model of the form $x_{k+1} = F x_k + G u_k$. In ref we discuss how to use other types of models, but the end point is always a discrete-time state-space system. For simplicity, we assume that the following state-space model is given for the cart system
-
-```math
-F = ,\quad G =
-```
-where $p$ and $\dot{p}$ are the position and velocity of the cart, and $\theta$ and $\dot{\theta}$ are the angle and angular velocity of the pendulum.
-
-Our goal is to control the cart's position $p$ and the pendulum's angle $\theta$. We denote the signals we want to control by $y$, which in our case gives $y_1 = p$ and $y_2 = \theta$. How the states relate to the output $y$ can compactly be written as $y = Cx$, which for the cart system gives
-```math
-C =
-```
-An additional requirement is that we do not want the control signal to be too "jittery". This is addressed with a penalty $\Delta u^T R_r \Delta u$, where $\Delta u$ denotes the change in the control input between two time steps.
-The objective at a single time step thus becomes $(Cx - y)^T Q (Cx-y) + \Delta u^T R_r \Delta u$.
-
-
-In addition to the dynamics of the system, we also want to impose additional constraints when controlling the system. First, there is a limited amount of force $u$ that can be applied. Specifically, this gives us the following bounds on the control: $-2 \leq u \leq 2$. Moreover, we also want the angle $\theta$ not to be too large, since this would invalidate our model. Therefore, we also have the constraint $-0.2 \leq \theta \leq 0.2$.
-
-
-
-
-```julia
-F =
-```
-
-Taken together, we pose the control problem as a receding-horizon control problem.
-
-Control horizon / Prediction horizon (hint at move blocking)
-
-Simulate the system
-
-## Generating C-code
diff --git a/docs/src/manual/game.md b/docs/src/manual/game.md
index d089e885..2503ec89 100644
--- a/docs/src/manual/game.md
+++ b/docs/src/manual/game.md
@@ -1,4 +1,4 @@
-# [Robust MPC](@id man_robust)
+# [Game-Theoretic MPC](@id man_robust)
 
 **LinearMPC.jl** can also solve general Nash equilibria of game-theoretic linear MPC problems.
 Specifically, it can handle objectives of the form
diff --git a/docs/src/manual/moveblock.md b/docs/src/manual/moveblock.md
index 23a3ee78..fb40d379 100644
--- a/docs/src/manual/moveblock.md
+++ b/docs/src/manual/moveblock.md
@@ -28,20 +28,16 @@
 Finally, it is possible to define different move blocks for different control signals.
 
 Consider the example of the control of an inverted pendulum on a cart, which is predefined as one of the examples in **LinearMPC.jl** and can be accessed with the call `mpc_examples`. We consider the case where we want the first output (the position of the cart) to reach a certain value (1 m, to be specific). Below we create three different MPC controllers with different prediction horizons and show the resulting step responses.
 
-```julia
+```@example move_block
 using LinearMPC,Plots
 tsolve, plt = zeros(3),plot();
 for (k,Np) in enumerate([100,75,50])
     mpc,_ = LinearMPC.mpc_examples("invpend",Np)
-    tsolve[k] = @elapsed sim = LinearMPC.Simulation(mpc;r=[1,0],N=500);
+    dynamics = (x,u,d) -> mpc.model.F*x + mpc.model.G*u # use the controller's linear model as the simulated plant
+    tsolve[k] = @elapsed sim = LinearMPC.Simulation(dynamics,mpc;r=[1,0],N=500);
     plot!(plt, sim,yids=[1],uids=[], color = k, label="Np = "*string(Np))
 end
-plot!(plt,ylims=(-0.5,1.25))
-```
-
-```@raw html
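The deleted example and the updated simulation above both rest on the same receding-horizon idea: plan a whole input sequence over the prediction horizon, apply only the first move, then re-plan from the new state. As a language-agnostic sketch of that loop (Python here, with an assumed double-integrator "cart" model and hypothetical weights; this is not the LinearMPC.jl API nor the real `invpend` matrices, and the horizon problem is solved by unconstrained least squares instead of the QP a real MPC would use):

```python
import numpy as np

# Assumed illustrative model: a discrete-time double integrator
# (cart position/velocity), NOT the matrices of the "invpend" example.
dt = 0.1
F = np.array([[1.0, dt],
              [0.0, 1.0]])
G = np.array([[0.5 * dt**2],
              [dt]])
C = np.array([[1.0, 0.0]])   # controlled output: cart position p

Np = 20              # prediction horizon (hypothetical)
Qw, Rw = 1.0, 0.05   # output / input weights (hypothetical)
r = 1.0              # position reference
umax = 2.0           # force bound, mirroring -2 <= u <= 2

# Condensed prediction over the horizon: y_{1..Np} = Phi x0 + Theta u_{0..Np-1}
Phi = np.vstack([C @ np.linalg.matrix_power(F, k) for k in range(1, Np + 1)])
Theta = np.zeros((Np, Np))
for k in range(1, Np + 1):
    for j in range(k):
        Theta[k - 1, j] = (C @ np.linalg.matrix_power(F, k - 1 - j) @ G)[0, 0]

def mpc_step(x):
    """One receding-horizon step: plan Np moves, apply only the first."""
    # Weighted least squares stands in for the constrained QP solve.
    A = np.vstack([np.sqrt(Qw) * Theta, np.sqrt(Rw) * np.eye(Np)])
    b = np.concatenate([np.sqrt(Qw) * (r - Phi @ x), np.zeros(Np)])
    u_plan = np.linalg.lstsq(A, b, rcond=None)[0]
    return float(np.clip(u_plan[0], -umax, umax))

x = np.zeros(2)
for _ in range(200):         # closed-loop simulation
    u = mpc_step(x)
    x = F @ x + G[:, 0] * u
print(f"final position: {x[0]:.3f}")  # settles near the reference
```

Applying only the first planned input and re-solving at every step is what makes the controller a feedback law rather than an open-loop plan; shortening `Np` (or blocking moves, as in the `moveblock` section) trades closed-loop performance for a cheaper per-step solve.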