Chemostat and Bioprocesses Set

coordinated by Claude Lobry

Volume 3

Optimal Control in Bioprocesses

Pontryagin’s Maximum Principle in Practice

Jérôme Harmand

Claude Lobry

Alain Rapaport

Tewfik Sari

Introduction

Applying optimal control theory to concrete examples is often considered a difficult task, as mastering the nuances of this theory requires considerable investment. In the literature in this field, there are many books that discuss optimal control theory (e.g. [LEE 67, VIN 00]) illustrated using examples (e.g. [BRY 75] or [TRÉ 05]), and books dedicated to families of applied problems (e.g. [LIM 13]). The objective of the current book is to present a pedagogical view of the fundamental tenets of this theory, somewhat in the style of Liberzon (see [LIB 12]), and to guide the reader in the application of the theory, first using academic examples (the swing problem, a driver in a hurry – also known as “the double integrator” – and the “moon landing” problem), and then moving on to concrete examples in biotechnology, which form the central part of the book. Special emphasis has been placed on the geometric arguments and interpretations of the trajectories given by Pontryagin’s maximum principle (PMP).

While this book studies optimal control, it is not, strictly speaking, a book on optimal control. It is, first and foremost, an introduction – and only an introduction – to PMP, which is one of the tools used in optimal control theory. Optimal control aims to determine a control signal (or action signal) that minimizes (or maximizes) an integral performance criterion involving the state of a dynamic system (with constraints if required), either over a fixed time period or with a free terminal time. In many situations, applying PMP allows us to comprehensively characterize the properties of this control, understand all the nuances of its “synthesis” and even obtain the value of the control to be applied at any point as a function of the system state.
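To fix ideas, a problem of this type can be sketched in standard notation as follows (the symbols here are generic and illustrative; they are not tied to any particular chapter of the book):

```latex
\min_{u(\cdot)} \; J(u) = \int_{0}^{T} \ell\bigl(x(t), u(t)\bigr)\, dt
\quad \text{subject to} \quad
\begin{cases}
\dot{x}(t) = f\bigl(x(t), u(t)\bigr), & x(0) = x_0,\\[2pt]
u(t) \in U \subset \mathbb{R}^m, & t \in [0, T],
\end{cases}
```

where the terminal time T may be fixed or free, and further constraints may be imposed on the state x or on the terminal condition x(T).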

At a time when a basic computer makes it possible to envisage the use of optimization techniques said to be direct1 for a large number of problems encountered in engineering, it is fair to wonder about the benefits of turning to a method that enables the computation of analytical optimal solutions. On the one hand, to dismiss such methods would be to forget that using a numerical optimization procedure requires taking into account the specific initial conditions of the dynamic system under consideration, which limits how generic the computed control can be. On the other hand, when an optimal control is available, it makes it possible to compute the minimal (or maximal) value of the optimization criterion, which is not possible with a numerical approach (except in some very particular cases). In this way, and independently of the practical constraints that may lead a user to apply a control that deviates, however minimally, from the theoretical optimum, we have a means of quantifying the distance between the theoretically optimal trajectories and those observed in experiments carried out on the real system.

Over the past few years, the control of bioprocesses has seen startling growth, due notably to the extraordinary development in the world of sensors. Until quite recently, only physical quantities (such as temperature, pressure or flow rates) could be precisely measured. Today, however, it is possible to take online measurements of system variables that can be called functional, such as the substrate concentration or the concentration of bacteria in the reaction medium. Consequently, many technologists will state that control in biological systems – which often consists of keeping certain values constant – no longer poses a major problem. However, in our opinion, this view forgets that control theory seeks not only to stabilize a system and reject disturbances, but also to calculate the set-point trajectory. In other words, it attempts to establish around what state the system must be operated, both in terms of optimality and in order to effectively control it so that the values of the variables of interest can, as far as possible, stay close to this set-point.

The title of the first part of the book, “Learning to Use Pontryagin’s Maximum Principle”, indicates that it offers an approach based on learning the procedures needed to solve the relevant equations (rather than focusing on a theoretical discussion of fundamental results), procedures that are usually rather difficult to access in the existing literature. In Chapter 1, we revisit concepts as basic as the minimization of a function, which, by extension, leads to the minimization of a functional through the calculus of variations. After presenting the limitations relating specifically to the function classes to which the control must belong, Chapter 2 presents the terminology used in optimal control and PMP. Chapter 3 presents several academic applications and problems that highlight some nuances of PMP, especially the importance that must be accorded to questions of regularity of the control.

The second part of the book, “Applications in Process Engineering”, comprises three distinct chapters that focus on problems specific to process engineering and biotechnology. In Chapter 4, we describe a problem of the optimal start-up of a biological reactor. We will see that in order to maximize the performance of the bioreactor (that is, to minimize the time in which a resource – here, a pollutant – reaches a given threshold), the control is highly dependent on the type of growth function under consideration.

In Chapter 5, we go on to examine the optimization of biogas production. More specifically, we propose – given particular initial conditions of the system, which is two-dimensional – a solution to the problem of maximizing biogas production within a given time range. We show that the constraints on the control (typically its minimum and maximum admissible values) greatly constrain the proposed solution.

Finally, in Chapter 6, we will discuss the optimization of a membrane filtration system. These systems are being used more and more in biotechnology. Filtering through a membrane consists of maintaining a pressure difference, called the transmembrane pressure (TMP), across a membrane immersed in a fluid. The force created results in the movement of fluid from the side where pressure is greater to the side where pressure is lower. As this happens, elements in the fluid that are larger than the pore size are retained by the membrane, allowing these elements to be filtered out of the fluid. Over time, these elements clog up the membrane. At this point, we must either increase the TMP to maintain a constant flow across the membrane or accept a continuous decrease in the flow across the membrane until such time as all pores are clogged. To limit this phenomenon, we can regularly unclog the membrane, for example using a backwash fluid. If the membrane performance is defined as the quantity of fluid filtered over time, the question arises as to which backwash strategy would be most appropriate in order to maximize the quantity of fluid filtered over a given time period. In practice, this is the same as determining at what time instants, and how often, the backwash must be applied, keeping in mind that clear water is used during the backwash; the volume of this water is then subtracted from the performance criterion. We thus find ourselves faced with an inevitable compromise: unclogging is essential to keep the membrane as clean as possible, but it must be carried out with the lowest possible frequency so that the filtration performance is not affected. If there is no model of the installation, we have little choice but to proceed by trial and error: we take a grid of time instants and fix the duration of the washes; the backwashes are then carried out at these instants over a series of experiments, keeping in mind that the initial state of the membrane may play a large role here.
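As an illustration of this trial-and-error approach, the following toy simulation compares backwash schedules on a grid. It is only a sketch: the fouling law, the flux model and all parameter values (`alpha`, `wash_cost`, the horizon `T`) are hypothetical choices made for illustration, not the model studied in the book.

```python
def simulate(n_backwashes, T=10.0, dt=0.001, alpha=0.5, wash_cost=0.3, m0=0.0):
    """Net filtered volume over [0, T] with n evenly spaced instantaneous
    backwashes (toy model: all dynamics and parameters are hypothetical)."""
    # grid of backwash instants, excluding t = 0 and t = T
    times = [T * (k + 1) / (n_backwashes + 1) for k in range(n_backwashes)]
    m = m0            # lumped fouling state of the membrane
    volume = 0.0      # performance criterion: filtered volume minus wash water
    t = 0.0
    next_wash = 0
    while t < T:
        if next_wash < len(times) and t >= times[next_wash]:
            m = 0.0                  # backwash: the membrane is unclogged...
            volume -= wash_cost      # ...at the price of clean water
            next_wash += 1
        q = 1.0 / (1.0 + m)          # flux decreases as the membrane clogs
        volume += q * dt             # explicit Euler step for the volume
        m += alpha * q * dt          # fouling grows with the filtered flow
        t += dt
    return volume

# Trial and error over the grid of schedules: the compromise appears as an
# interior maximum (too few washes -> clogging, too many -> water wasted).
results = {n: simulate(n) for n in range(9)}
best = max(results, key=results.get)
```

With these (arbitrary) parameters, the net filtered volume first increases with the number of washes and then decreases again, so the best schedule lies strictly between "never wash" and "wash constantly" – exactly the compromise described above, found here by brute force rather than by any optimality argument.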

If we are able to obtain a model of the filtration membrane, we can then ask whether control tools may be used. It is important to note here that this type of model is generally nonlinear. With a direct approach we may – depending on the initial conditions given – obtain a specific filtration/backwash sequence to be applied in order to maximize the system’s performance. But how can we find out whether the solution returned by the algorithm is a global one? As the model used is not linear, it is entirely possible that another policy would yield identical, or even better, performance. In this book, we will see that characterizing the optimal control using PMP makes it possible to completely solve this problem, even if applying the optimal solution may pose practical problems that must then be resolved. In effect, while on the real system we can only choose the time instants at which the backwash is applied, applying PMP requires that the controls be allowed to belong to much larger sets in order to guarantee the existence of this optimal control. In reality, these controls may take values that make no physical sense. However, this is not the point to focus on here, as, in practice, several strategies may allow us to find approximations for these values (see, for instance, the theory proposed in [KAL 17]). The essential result to keep in mind is that the precise values of the control to be applied can be found only by using PMP.

PART 1
Learning to use Pontryagin’s Maximum Principle