Using MPC controllers

As seen in the tutorials, basic use of \muaompc requires little knowledge of MPC theory. To use the more advanced features, however, a better understanding of MPC internals is required. This section starts with some basic theory about MPC and continues with a detailed description of the several ways in which \muaompc can help solve MPC problems.

Basics of MPC

The MPC setup can be equivalently expressed as a parametric quadratic program (QP), a special type of optimization problem. Under certain conditions (which \muaompc ensures are always met), the QP is strictly convex. This means that the QP can be solved efficiently by applying convex optimization theory.
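For intuition, strict convexity means the Hessian H of the QP is positive definite, which guarantees a unique minimizer. The following numpy sketch illustrates this on a made-up 2x2 Hessian (not one generated by \muaompc):

```python
import numpy as np

# Hypothetical toy Hessian and gradient; muaompc guarantees positive
# definiteness for the QPs it generates, here we simply verify it.
H = np.array([[2.0, 0.5],
              [0.5, 1.0]])
g = np.array([1.0, -1.0])

# H is symmetric, so positive eigenvalues imply positive definiteness.
eigenvalues = np.linalg.eigvalsh(H)
assert np.all(eigenvalues > 0), "H is not positive definite"

# The unique unconstrained minimizer of 0.5*u'Hu + u'g solves H u = -g.
u_star = -np.linalg.solve(H, g)
print(u_star)
```

Positive definiteness is what allows the fast gradient-based methods used in embedded MPC to converge reliably.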

Parametric QP

The QP depends on the current system state and on the reference trajectories, if any. We express our parametric QP in the following form:

\begin{aligned}
\underset{\vb{u}}{\text{minimize}} \quad & \frac{1}{2} \vb{u}^T H \vb{u} + \vb{u}^T g(x, \vb{x}_{ref}, \vb{u}_{ref}) \\
\text{subject to} \quad & \underline{\vb{u}} \leq \vb{u} \leq \overline{\vb{u}} \\
& \underline{\vb{z}}(x) \leq E \vb{u} \leq \overline{\vb{z}}(x)
\end{aligned}

An MPC controller is based on the repeated solution of a QP parameterized by the current state x and by the current state and input reference trajectories \vb{x}_{ref} and \vb{u}_{ref}, respectively. In other words, at every sampling time we find an approximate solution to a different QP. We emphasize that the MPC solution \tilde{\vb{u}} is only a (possibly rough) approximation of the exact solution \vb{u}^*. In some applications, even rough approximations deliver acceptable controller performance. Exploiting this fact is of great importance for embedded applications, where computational resources are limited.

There is another important property to note. Some optimization algorithms require an initial guess of the solution of the QP. Clearly, a good (bad) guess, i.e. one close to (far from) the solution, will deliver a good (bad) approximate solution, all other conditions being equal. This property can be used to the controller's advantage. There are basically two strategies for computing good initial guesses. One is the cold-start strategy: the initial guess is always the same for all QPs. This strategy is mainly used when sudden changes in the state are expected (e.g. high-frequency electrical applications). The other strategy is the warm-start strategy: the previous MPC approximate solution is used to compute the initial guess for the current QP. In applications where the state changes slowly with respect to the sampling frequency (e.g. most mechanical systems), two consecutive QPs arising from the MPC scheme have very similar solutions.
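The effect of warm starting can be illustrated with a toy box-constrained QP solved by a projected gradient method (a simple stand-in for the gradient-based solvers typically used in embedded MPC). All data below is made up for illustration; it is not \muaompc output:

```python
import numpy as np

# Toy box-constrained QP: minimize 0.5*u'Hu + u'g  s.t.  lb <= u <= ub.
H = np.array([[3.0, 1.0], [1.0, 2.0]])
lb, ub = np.array([-1.0, -1.0]), np.array([1.0, 1.0])

def pgrad(g, u0, iters):
    """Projected gradient method with a fixed step size."""
    step = 1.0 / np.linalg.norm(H, 2)  # step from the Lipschitz constant
    u = u0.copy()
    for _ in range(iters):
        u = np.clip(u - step * (H @ u + g), lb, ub)
    return u

g_prev = np.array([4.0, -2.0])   # linear term of the "previous" QP
g_now = np.array([4.2, -2.1])    # slightly different current QP

u_exact = pgrad(g_now, np.zeros(2), 1000)           # reference solution
u_cold = pgrad(g_now, np.zeros(2), 1)               # cold start: fixed guess
u_warm = pgrad(g_now, pgrad(g_prev, np.zeros(2), 1000), 1)  # warm start

# With the same one-iteration budget, the warm-started approximation
# is closer to the exact solution than the cold-started one.
cold_err = np.linalg.norm(u_cold - u_exact)
warm_err = np.linalg.norm(u_warm - u_exact)
print(cold_err, warm_err)
```

Because the two consecutive QPs have nearly identical solutions, one iteration from the warm start already lands on the exact solution, while the cold start does not.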

A last thing to note is that g(\cdot) is the only term that depends on the reference trajectories. For the special case of regulation to the origin (where both references are zero) we will simply write g(x).

Using the Python interface

In this section we describe the two main controller functions available to users of \muaompc. For easier explanation, we will first discuss the Python interface. Later we will discuss the equivalent, albeit slightly more complex, C functions. The Python interface can be used for prototyping and simulating MPC controllers. The C functions are the actual controller implementation.

Solving MPC problems

The most straightforward way to solve the MPC optimization problem is using \muaompc’s default QP solver.

ctl.solve_problem(x)

Solve the MPC problem for the given state using the default solver.

Parameters: x (numpy array) – the current state. It must be of size states.

This method is an interface to the C code. See its documentation for further details.

This method relies on other fields of the ctl structure:

  1. conf configuration structure.
  2. x_ref state reference trajectory
  3. u_ref input reference trajectory
  4. u_ini initial guess for the optimization variable
  5. l_ini initial guess for the Lagrange multipliers
  6. u_opt approximate solution to the optimization problem
  7. l_opt approximate optimal Lagrange multiplier
  8. x_trj state trajectory under the current u_opt

conf contains the basic configuration parameters of the optimization algorithm. It consists of the following fields:

  • warmstart: if True, use a warmstart strategy (default: False)
  • in_iter: number of internal iterations (default: 1)
  • ex_iter: number of external iterations. If mixed constraints are not present, it should be set to 1. (default: 1)

x_ref is an array of shape (hor_states, 1), whereas u_ref is an array of shape (hor_inputs, 1). By default, x_ref and u_ref are zero vectors of appropriate size. In other words, the default case is MPC regulation to the origin.
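A constant set point can be turned into reference trajectories of the required shape by tiling it along the horizon. The names n, m and N below are illustrative stand-ins for the number of states, number of inputs and horizon length fixed at code generation time:

```python
import numpy as np

n, m, N = 3, 2, 10                        # states, inputs, horizon (illustrative)
x_sp = np.array([[2.0], [2.0], [2.0]])    # constant state set point
u_sp = np.zeros((m, 1))                   # constant input set point

# Stack the set point N times to obtain trajectories of shape
# (hor_states, 1) = (N*n, 1) and (hor_inputs, 1) = (N*m, 1).
x_ref = np.tile(x_sp, (N, 1))
u_ref = np.tile(u_sp, (N, 1))

print(x_ref.shape, u_ref.shape)  # (30, 1) (20, 1)
```

Time-varying references are built the same way, by stacking the desired state (input) for each step of the horizon into one column vector.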

u_ini is an array of shape (hor_inputs, 1), and l_ini is an array of shape (hor_mxconstrs, 1). l_ini is only of interest for problems with mixed constraints. By default, both are zero. They mainly need to be set by the user when manually cold-starting the MPC algorithm (conf.warmstart = False). If conf.warmstart = True, they are automatically computed based on the previous solution.

u_opt is an array of shape (hor_inputs, 1). l_opt is an array of shape (hor_mxconstrs, 1). l_opt is only of interest for problems with mixed constraints. x_trj is an array of shape (hor_states+states, 1).
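For plotting or analysis, the stacked trajectory vectors can be reshaped into one row per time step. Here x_trj is a made-up array standing in for ctl.x_trj, with illustrative values of n (states) and N (horizon):

```python
import numpy as np

n, N = 2, 5                       # states and horizon (illustrative)
# Stand-in for ctl.x_trj, which stacks x_0, ..., x_N in a single
# column of shape (hor_states + states, 1) = ((N+1)*n, 1).
x_trj = np.arange((N + 1) * n, dtype=float).reshape(-1, 1)

# One row per predicted time step: row k is the predicted state x_k.
X = x_trj.reshape(N + 1, n)
print(X.shape)  # (6, 2)
```

The same reshaping applies to u_opt, which stacks the N input vectors of the horizon into a single (hor_inputs, 1) column.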

Usually in an MPC scheme, only the first input vector of the optimal input sequence u_opt is of interest, i.e. the first inputs elements of u_opt.

For example, assume mpc is an MPC object with only input constraints, we want all states to be regulated to 2, and the current state has all entries equal to 3:

mpc.ctl.conf.in_iter = 5  # configuration done only once
mpc.ctl.conf.warmstart = True  # use warmstart

mpc.ctl.x_ref = numpy.ones(mpc.ctl.x_ref.shape) * 2

# repeat the following lines for every new state x 
x = numpy.ones(mpc.size.states) * 3  # current state, must be of size states
mpc.ctl.solve_problem(x)
u0 = mpc.ctl.u_opt[:mpc.size.inputs]  # 1st input vector in sequence

Using a different solver

Optionally, the user can employ a different QP solver together with \muaompc. This is useful, for example, for prototyping MPC algorithms, or for finding exact solutions using standard QP solvers.

ctl.form_qp(x)

Compute the parametric quadratic program data using x as parameter.

Parameters: x (numpy array) – the current state. It must be of size states.

This method is an interface to the C code. See its documentation for further details.

This method relies on other fields of the ctl structure:

  1. x_ref state reference trajectory
  2. u_ref input reference trajectory
  3. qpx the structure with the created quadratic program data

x_ref is an array of shape (hor_states, 1), whereas u_ref is an array of shape (hor_inputs, 1). By default, x_ref and u_ref are zero vectors of appropriate size. In other words, the default case is MPC regulation to the origin.

qpx contains the computed data of the parametric QP for the given state and references. It consists of the following fields:

  • HoL Hessian matrix
  • gxoL gradient vector for the current state and references
  • u_lb lower bound on the optimization variable
  • u_ub upper bound on the optimization variable
  • E matrix of state dependent constraints
  • zx_lb lower bound on the state dependent constraints
  • zx_ub upper bound on the state dependent constraints

Refer to the MPC description for a precise definition of these fields.
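As a sketch of what an external solver receives, the following solves the input-constrained case (no mixed constraints) with a basic projected gradient method. The qpx-like data here is made up; in practice HoL, gxoL, u_lb and u_ub would be read from ctl.qpx after calling form_qp, and any off-the-shelf QP solver (e.g. CVXOPT, qpOASES) could be used on the same data instead:

```python
import numpy as np

# Made-up stand-ins for the qpx fields (input-constrained case only).
HoL = np.array([[2.0, 0.3], [0.3, 1.5]])   # Hessian matrix
gxoL = np.array([[3.0], [-4.0]])           # gradient for current x, refs
u_lb = np.array([[-1.0], [-1.0]])          # lower bound
u_ub = np.array([[1.0], [1.0]])            # upper bound

# Generic projected gradient iteration on 0.5*u'Hu + u'g with box bounds.
u = np.zeros_like(gxoL)
step = 1.0 / np.linalg.norm(HoL, 2)        # step from the Lipschitz constant
for _ in range(200):
    u = np.clip(u - step * (HoL @ u + gxoL), u_lb, u_ub)

print(u.ravel())  # converges to the box-constrained minimizer
```

With mixed constraints present, E, zx_lb and zx_ub define additional state-dependent inequalities that the chosen solver must also handle.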

For example, assume mpc is an MPC object with only input constraints, we want all states to be regulated to 2, and the current state has all entries equal to 3:

mpc.ctl.x_ref = numpy.ones(mpc.ctl.x_ref.shape) * 2

# repeat the following lines for every new state x 
x = numpy.ones(mpc.size.states) * 3  # current state, must be of size states
mpc.ctl.form_qp(x)
# use mpc.ctl.qpx together with the QP solver of your preference

An example of how to use form_qp together with the QP solver CVXOPT in Python can be found in examples/ltidt/solver_cvxopt. Additionally, an example of how to use form_qp together with the QP solver qpOASES in C can be found in examples/ltidt/solver_qpoases.

Using the C implementation

The C functions

The functions available to the user are described in the C API in the mpc.h file. The Python and MATLAB interfaces offer exactly the same functionality as the C interface, but with a simplified syntax.

For example, the C function void mpc_ctl_solve_problem(struct mpc_ctl *ctl, real_t x[]) is called in C as mpc_ctl_solve_problem(&ctl, x), assuming both arguments have been declared. In Python, it is called as mpc.ctl.solve_problem(x), assuming the MPC Python object is called mpc. Similarly, in MATLAB the same function is called as ctl.solve_problem(x), assuming the MPC controller is called ctl.

In all cases, the approximate solution u_opt is found inside the ctl structure. For example, in C the first element of the sequence is accessed via ctl->u_opt[0], in Python via ctl.u_opt[0], and in MATLAB via ctl.u_opt[1].

The full C documentation is available as doxygen-generated documentation.