Undergrad Mathematics for Optimisation

Hi Everyone,

I'm looking for some advice from someone who has studied Optimisation as an undergrad or postgrad, and who has a good understanding of the underpinning mathematical concepts.

I'm starting an MSc in September, and perhaps unlike many people who will start the course, my maths knowledge does not extend beyond my A-Levels.

The MSc is in Business Analysis, so I don't think that the maths will be too rigorous, not like a pure Operations Research course might be. It will be more about understanding the concepts and being able to implement the techniques using software packages.

Still, I'm conscious I've missed out on undergraduate maths and I'd like to further my knowledge (to the extent that I can in the next three months!) and try and improve my understanding in some key areas.

What would be the best areas to focus on do you think? I was considering the MIT opencourseware lectures on linear algebra: http://ocw.mit.edu/courses/mathematics/18-06-linear-algebra-spring-2010/video-lectures/

Differential equations perhaps?

Any advice would be really appreciated, particularly if you can explain why the suggested skills are useful from an Optimisation perspective.

Thanks for your thoughts.

Tom
Reply 1
Optimisation is a really big field, but its foundations are largely contained within linear algebra. For business analytics, I'm not really sure what areas you'd want to focus on. I guess the most common type of optimisation you'll look at in business is curve-fitting, i.e. finding the parameters of a hypothesised curve which most closely fits the data. That's linear regression, and it comes straight from linear algebra, so I think linear algebra is a great place to start (and those lecture videos from MIT are excellent). You could do a lot worse than to focus most of your energy on that single course.
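
To make that concrete, here's a minimal sketch of fitting a straight line by least squares, using Python with NumPy (my choice of tool, and the data values are made up for illustration):

import numpy as np

# Made-up data: y is roughly 2x + 1 with some noise
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])

# Design matrix [x, 1] so we fit y = m*x + c
A = np.column_stack([x, np.ones_like(x)])

# Solve the least-squares problem: minimise ||A @ params - y||^2
params, residuals, rank, sv = np.linalg.lstsq(A, y, rcond=None)
m, c = params
print(f"slope = {m:.2f}, intercept = {c:.2f}")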

If you're in doubt, buy a (used) copy of Numerical Recipes and have a read. The prose is very good, and the book will be an excellent resource for the future; it also gives you some more practical discussion of optimisation.
Reply 2
Thanks very much, mik1a. That's really helpful advice. The Numerical Recipes book looks like a great resource.

I think a large part of the optimisation course is developing an understanding of linear programming, which I previously thought was synonymous with optimisation. I can see now that regression would be considered an optimisation technique, in that you're trying to minimise the error.

My hunch was that the linear algebra course would be pretty useful, so I'm glad you have confirmed that.

Tom
Reply 3
Linear programming is a subset: it concerns optimisation where the objective is a linear function of the arguments. That's not so hard; you can think of it like a marble rolling on a flat (but inclined) plane, where all you need to do is find the path to the most downhill part. If the feasible region (inside the outer boundaries) is convex then it's easy and the ball will naturally roll down to it; otherwise you need some clever algorithm (think of a marble maze that you tilt: it won't automatically roll to the bottom if there are interior walls).
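
If it helps to see the marble analogy in code, here's a tiny sketch using SciPy's linprog (a toy problem I've made up, not anything from a course):

from scipy.optimize import linprog

# Toy problem: minimise 4x + 3y (the "downhill" direction)
c = [4, 3]

# subject to x + y >= 2, rewritten as -x - y <= -2 for linprog
A_ub = [[-1, -1]]
b_ub = [-2]

# and 0 <= x <= 5, 0 <= y <= 5 (the outer walls)
bounds = [(0, 5), (0, 5)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(res.x, res.fun)  # the optimum sits at a corner: (0, 2) with value 6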

That's a pretty cool example, but more often you will see objectives that are nonlinear functions of the arguments. The most obvious is y = x^2. What's the minimum? What about z = x^2 + y^2? What about z = (1 - x)^2 + 100*(y - x^2)^2? That last one is interesting (can you work it out by inspection?). Things get more interesting when you've got to minimise something like that rather than a standard linear function. Also, constraints are no longer strictly necessary (e.g. y = x^2 has a minimum on its own, whereas y = x only has one subject to something like x > -2).

Curve-fitting is quadratic optimisation because you're usually looking for the curve that minimises the sum of the squared residuals. The residuals are the distances between the curve and the data points, and once you square them they are all non-negative. If the curve hits every point, your objective is zero; otherwise it's some positive number that's hopefully not too large.
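
In code, that objective is only a few lines; here's a sketch (the data and the candidate curve are invented for illustration):

import numpy as np

# Invented data points
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([0.2, 1.1, 3.8, 9.2])

def sum_of_squared_residuals(params):
    a, b = params
    predicted = a * x**2 + b      # a candidate curve y = a*x^2 + b
    residuals = y - predicted     # distances from the curve to the data
    return np.sum(residuals**2)   # zero for a perfect fit, positive otherwise

print(sum_of_squared_residuals([1.0, 0.0]))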

Solving this sort of problem still comes down to linear algebra, even though the objective isn't strictly linear: minimising the squared error ||Ax - b||^2 leads to the normal equations (A^T A)x = A^T b, which is pure matrix arithmetic. So the field of matrices and vectors extends far beyond linear programming.

If you hadn't noticed, it's something I'm interested in quite a lot!
Reply 4
Great. Thanks again for the insights, mik1a.

I have found the minimum values of your functions, but not by inspection, I'm afraid! (Honestly, how would you do that with the last one? haha) I found a nice 3D function grapher.

So would it be correct to say:

If all constraints are linear, then it's a linear programming problem.
For linear constraints that are convex, the objective function is easily minimised.
For linear constraints that are not convex, we have increased complexity, and clever algorithms are required to find a solution, if indeed the problem is soluble.
If constraints are non-linear, then we are no longer talking about linear programming, but perhaps we are now talking about 'convex optimisation of non-linear functions'.

I just watched an introductory Stanford lecture on Convex Optimisation. Fascinating stuff. The professor suggested that linear programming was a nice follow-on topic after studying least-squares regression and singular value decomposition. Going back to your original post, I think the starting point for all of these techniques is learning some linear algebra. Would you agree?

Thanks again for your enthusiasm and willingness to engage on this. It's great to develop a little knowledge.

Tom
Reply 5
I'd say that's on the right track, but it helps to make a distinction between the objective function and the constraints. You can have a linear objective with nonlinear constraints, and that becomes quite tricky too (e.g. what is the minimum of z = 4x - 3y on the unit circle?). You can also have a nonlinear objective function with linear constraints (e.g. minimise y = x^2 subject to x > 4). The algorithms you would use in each case are very different.
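
For what it's worth, SciPy's general-purpose minimize can handle the circle example; here's a sketch of my own, using the SLSQP method with the unit circle as an equality constraint:

from scipy.optimize import minimize

# Linear objective, nonlinear constraint: minimise 4x - 3y on the unit circle
def objective(v):
    return 4 * v[0] - 3 * v[1]

on_circle = {"type": "eq", "fun": lambda v: v[0]**2 + v[1]**2 - 1}

res = minimize(objective, x0=[1.0, 0.0], method="SLSQP", constraints=[on_circle])
print(res.x, res.fun)  # roughly (-0.8, 0.6), with objective value -5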

Of course, we're always talking about functions of one or two variables here, but in reality your objective might depend on 20 or 5,000 variables. That's why the geometric way of thinking is a good introduction, but linear algebra is the right thing to take you forward.

honestly, how would you do that with the last one?


You can tell because it's a sum of two squares, so the minimum value will be zero, achieved when both terms are zero. So 1 - x has to be zero (giving x = 1) and y - x^2 has to be zero (giving y = 1). The minimum is at (1, 1), but the function is designed to have a very shallow gully along the curve y = x^2. It's a famous test for unconstrained nonlinear optimisation algorithms, because that gully makes it very hard to home in on exactly where the minimum is. It's called the Rosenbrock function.
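
If you want to watch an algorithm grind along that gully, SciPy actually ships the Rosenbrock function as a test problem; a quick sketch:

from scipy.optimize import minimize, rosen

# Nelder-Mead from the classic starting point (-1.2, 1); the shallow
# gully along y = x^2 makes progress slow compared with a simple bowl
res = minimize(rosen, x0=[-1.2, 1.0], method="Nelder-Mead")
print(res.x, res.nfev)  # ends up near (1, 1) after many function evaluations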

Going back to your original post, I think the starting point for all of these techniques is learning some linear algebra. Would you agree?


Yes. And stick with it; the interesting stuff comes after quite a number of lectures on the basics.
