Basic optimization problem video transcript

Hello everybody! Today we will be talking about basic optimization problem formulation.

One of the most important steps in optimization is formulating a well-posed, meaningful problem that you can solve and whose results you can actually understand. As you might have guessed, this falls firmly within the optimization subcategory of the course.

First, I will discuss the objective function. The objective function is what we’re trying to optimize. Generally we formulate objective functions as something we’re trying to minimize, so here we see that we are trying to minimize some function of x, in this case f_objective of x. This objective function might be a cost function or some measure of performance of the system: the fuel burn of an aircraft, the power produced by a wind turbine, the bushels of corn produced by a farm. There are lots of different candidate objectives, and depending on which one you select and what you’re actually trying to solve for, you can get very different results.

Additionally, we’ve been talking about how minimizing the objective is what you do in most formulations. This is really just a nomenclature convention. We can maximize anything simply by minimizing its negative. So let’s say we’re trying to maximize the lift of an airplane: we minimize the negative of the lift. This way all the underlying math of optimization methods stays the same and stays focused on minimization problems.
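As a quick sketch of this trick in Python (SciPy here is my choice of tool, and the one-variable lift model is invented for illustration, not from the video):

```python
from scipy.optimize import minimize_scalar

# Hypothetical lift model (made-up numbers): peaks at an angle of attack of 0.3.
def lift(aoa):
    return 1.2 - (aoa - 0.3) ** 2

# Maximize lift by minimizing its negative.
res = minimize_scalar(lambda aoa: -lift(aoa))
print(res.x)  # optimal angle of attack, near 0.3
```

Negating `res.fun` at the end recovers the maximum lift value itself.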

We can also minimize a function that’s a conglomeration of other functions. Here f_objective of x is simply a summation of g(x) and h(x). This is one way of performing multi-objective optimization, which will be covered in more detail in our multi-objective optimization lecture. For now let’s just focus on minimizing a single objective. Objective functions are also known as functions of interest.

Next we’ll discuss design variables. Design variables in this case are labeled as x, and when I say x I don’t necessarily mean a scalar; x could be a vector or an array of any shape. Saying design variables or x also doesn’t mean that we have just one type of design variable; we could have many different types at once. For example, we could be trying to minimize the aircraft weight with the wing structural thicknesses as one set of design variables and, simultaneously, the wing aerodynamic shape as another.
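A minimal sketch of the summed-objective idea, again in SciPy (the g and h below are invented quadratic surrogates, not the video’s functions):

```python
import numpy as np
from scipy.optimize import minimize

def g(x):  # hypothetical metric one, e.g. a structural-mass surrogate
    return (x[0] - 1.0) ** 2

def h(x):  # hypothetical metric two, e.g. a drag surrogate
    return (x[0] + 1.0) ** 2 + x[1] ** 2

def f_objective(x):  # the conglomerate objective: a plain sum of g and h
    return g(x) + h(x)

res = minimize(f_objective, x0=np.array([2.0, 1.0]))
print(res.x)  # a compromise between g's and h's individual minima, near (0, 0)
```

Note that g alone is minimized at x[0] = 1 and h alone at x[0] = -1; the sum settles in between, which is the essential behavior of this style of multi-objective formulation.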

Next up we have constraints, and constraints, just like the objective function, are considered functions of interest. You can think of these as outputs from your model. Constraints can be inequality constraints or equality constraints. So when I see a constraint here, like this g(x), we can have a lower bound on g shown on the left and an upper bound on g shown on the right, and we know that g(x) must lie somewhere within these lower and upper bounds. Again, these can be scalars, vectors, or arrays of arbitrary size.

In addition to inequality constraints there are also equality constraints. Here you see we’re enforcing h(x) equal to h_equality. This is where we say that during the optimization some output from the model must equal some value. Let’s say we’re designing a wind turbine and we know we want it to produce 15 megawatts of power, so we say: the power must be 15 megawatts when the wind is blowing; now please design the cheapest blade we can manufacture. That’s an example of an equality constraint.

Now I’ll go through an example 2d optimization. Take a look at this function. It’s a pretty complex function: we have x and y directions, and the output is shown with contour lines and a color plot. We can see that there is one local maximum right here, and there are also four local minima. All four of these local minima have the same output function value of zero. This makes it a very interesting optimization problem because there are four different places where the optimizer can settle. That being said, for today’s lesson we won’t necessarily take advantage of those four different optima. Here is what the problem formulation looks like for this function if we were to write it out. We have x and y as the design variables and f_objective as the output shown on the contour plot. So here we’re doing an unconstrained optimization, minimizing this 2d function with respect to the design variables x and y.

Let’s take a look at what an actual gradient-based optimizer, SLSQP, does to solve this problem. First we start at (1,1) as a default starting point, and we see the optimizer query the design space in different places. It does a few line searches and then settles into a local optimum here, where the function value is zero, which is what we would expect. We could imagine that starting at (1,1), it settles into this local bowl containing a local optimum. This is great.
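The function in the video isn’t named, but Himmelblau’s function has exactly the structure described (one local maximum and four local minima, all with value zero), so here is a hedged reconstruction of the unconstrained run using SciPy’s SLSQP:

```python
from scipy.optimize import minimize

# Himmelblau's function: a stand-in with four local minima, all at f = 0.
def f_objective(v):
    x, y = v
    return (x**2 + y - 11) ** 2 + (x + y**2 - 7) ** 2

# Start at (1, 1), as in the video, and let SLSQP descend.
res = minimize(f_objective, x0=[1.0, 1.0], method="SLSQP")
print(res.x, res.fun)  # settles into one of the four minima, objective near 0
```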

Next let’s move on to adding an inequality constraint. Here we add a constraint that says x squared plus y squared must be less than or equal to 4. Remembering back to geometry days, this means the answer must lie within a circle of radius 2 centered at the origin. So let’s take a look at what this means for the design space. Superimposing the constraint onto the original design space, we see that the optimal answer must lie within this circle. Again we call on SLSQP to perform the optimization, and we find the lowest feasible value of the function in the upper right-hand portion of this circle. If you take a look at the contour lines this makes sense, and again it stays within this radius-2 circle.
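In SciPy’s constraint convention, an inequality constraint is feasible when its function is nonnegative, so x² + y² ≤ 4 is rewritten as 4 − x² − y² ≥ 0. A sketch, still using Himmelblau’s function as a stand-in for the video’s unnamed function:

```python
from scipy.optimize import minimize

def f_objective(v):  # Himmelblau stand-in for the video's unnamed function
    x, y = v
    return (x**2 + y - 11) ** 2 + (x + y**2 - 7) ** 2

# SciPy "ineq" convention: fun(v) >= 0 is feasible,
# so x^2 + y^2 <= 4 becomes 4 - x^2 - y^2 >= 0.
circle = {"type": "ineq", "fun": lambda v: 4.0 - v[0] ** 2 - v[1] ** 2}

res = minimize(f_objective, x0=[1.0, 1.0], method="SLSQP", constraints=[circle])
print(res.x)  # a point on or inside the radius-2 circle
```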

Now let’s further complicate this problem by adding another constraint: x plus y equals 1. That creates the straight line shown in green here, and the optimal answer must lie along this line. We still have the previous circle constraint, so the optimal answer must lie along this line and within the circle. Intuitively we know where this should be: somewhere between the two points where the line crosses the circle. But let’s see what the optimizer does to figure this out. First it jumps far outside the circle, violating the constraints, but then it realizes, “Oh, I need to stay on this line and get within the circle,” so it moves back to satisfy these constraints and reach an optimal answer. We see it settle on the right-hand side, at the lowest function value that is along the line and within the circle. This is great to see as well; here we have a reasonably constrained problem that the optimizer handles pretty quickly.
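Extending the same SciPy sketch, the line becomes an equality constraint ("eq", meaning the function must equal zero), stacked alongside the circle:

```python
from scipy.optimize import minimize

def f_objective(v):  # Himmelblau stand-in for the video's unnamed function
    x, y = v
    return (x**2 + y - 11) ** 2 + (x + y**2 - 7) ** 2

# Stay inside the radius-2 circle AND exactly on the line x + y = 1.
circle = {"type": "ineq", "fun": lambda v: 4.0 - v[0] ** 2 - v[1] ** 2}
line = {"type": "eq", "fun": lambda v: v[0] + v[1] - 1.0}

res = minimize(f_objective, x0=[1.0, 1.0], method="SLSQP",
               constraints=[circle, line])
print(res.x)  # on the line, within the circle
```

Note that the starting point (1, 1) violates the line constraint (1 + 1 ≠ 1), and SLSQP happily recovers, which mirrors the behavior described in the video.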

Now let me go into a bit more detail about what I mean by poorly posed and well-posed optimization problems. Here is an example of a poorly posed problem. Toward the extremes and edges of this graph the function changes very sharply, but near the center, where the function value is lower, there are many different optima that all have the same value; specifically, there are infinitely many optima along this circle. This means the optimizer could settle into any point along the circle and be equally optimal. This is a poorly posed problem because each of those optimal designs might have markedly different design variable values, and it would be challenging to physically interpret the results when many different optimal designs have identical performance.

Let’s take a look at two different optimizations on this function. If we start here, it converges to a point on the circle; remember, all points on the circle are equal to the optimizer. And if we start down here, we converge to a different point on the circle. This shows that there are many possible optimal answers, which could be confusing for the practitioner trying to interpret the results. A well-posed problem, on the other hand, is usually unimodal (it has one minimum) and is also smooth around that minimum. This is an example of an optimizer converging on a well-posed problem. Again, both of these cases were 2d examples so that we could visualize them easily, but you can imagine that for a physics-based multidisciplinary model it might be easy to unknowingly create a very poorly posed problem.
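A toy version of this pathology is easy to write down: a function whose minima form an entire ring, so different starting points land on different, equally optimal designs. (This exact function is not from the video, just a plausible stand-in.)

```python
import numpy as np
from scipy.optimize import minimize

# Every point on the unit circle is a minimum with f = 0: an ill-posed problem.
def ring(v):
    return (v[0] ** 2 + v[1] ** 2 - 1.0) ** 2

a = minimize(ring, x0=[2.0, 0.5], method="SLSQP")
b = minimize(ring, x0=[-0.3, -2.0], method="SLSQP")
print(a.x, b.x)  # both near the unit circle, but at different points on it
```

Both results are "optimal" to the same tolerance, yet the design variables differ substantially, which is exactly why such problems are hard to interpret.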

There are a few different ways that poorly posed problems often crop up. One is that you might have conflicting constraints that cause there to be no feasible design space. There also might be different discontinuities in the design space making it challenging for the optimizer to jump between different areas. Creating a well-posed optimization problem will be covered in more detail in one of the linked lectures.

So the main takeaway is that formulating a well-posed, reasonable problem is very important. You should start with the simplest optimization problem you can and slowly build up complexity, solving each problem along the way. It might be really tempting to throw the kitchen sink at the optimizer and say, “Here is everything I ever want to optimize, please do it,” but that often fails, so you want to set the optimizer up for success: start with the simplest problem you can and progressively build it up. By slowly adding complexity I mean adding design variables or constraints one by one, or in groups that you understand well. This lets you better understand how changing the optimization problem formulation affects the design space and the optimal results.

Now, again, these are just basic optimization problem formulation considerations. We have more advanced information both in the notebook accompanying this lecture and in the advanced optimization problem formulation lecture. I have put links to relevant resources, including an interactive Python notebook and the MDO book, in the description. The description also has timestamps for this video and links to other relevant lectures. Please check out other videos in this series and make sure to mash those like and subscribe buttons. Thank you!