# Linear Programming and Healthy Diets

#1

Hi

How do you get the constraints involving the y variables from 4x1+3x2+9x3 >= 6y1+2y2+y3?

I understand we are trying to find a minimum for 4x1+3x2+9x3, so we are looking for the largest possible lower bound m:

4x1+3x2+9x3 >= m

m comes from a nonnegative linear combination of the constraints (x1+x2+x3 >= 6 etc.), which I have worked out, but how do you then get the constraints for the y problem?
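The "largest lower bound" idea can be checked numerically. A minimal sketch in Python, with made-up data: only the objective 4x1+3x2+9x3 and the first constraint x1+x2+x3 >= 6 appear in the thread, so the other two rows of A below are hypothetical, chosen just so the arithmetic works out.

```python
# Lower bounds on  min 4x1 + 3x2 + 9x3  via multipliers y >= 0.
# Only the objective and the first constraint come from the thread;
# the other two constraints are hypothetical, for illustration.
c = [4, 3, 9]                      # objective coefficients
A = [[1, 1, 1],                    # x1 + x2 + x3 >= 6
     [1, 2, 0],                    # x1 + 2*x2    >= 2  (hypothetical)
     [3, 0, 1]]                    # 3*x1 + x3    >= 1  (hypothetical)
b = [6, 2, 1]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def column(M, j):
    return [row[j] for row in M]

# Combine the constraints with nonnegative multipliers y.
y = [3, 0, 1/3]

# Valid lower bound only if, variable by variable, the combined
# coefficient does not exceed the objective coefficient:
assert all(dot(column(A, j), y) <= c[j] + 1e-9 for j in range(3))

lower_bound = dot(b, y)            # 6*y1 + 2*y2 + 1*y3
print(lower_bound)

# Sanity check: a feasible x can never score below the bound.
x = [1/3, 17/3, 0]
assert all(dot(row, x) >= bi - 1e-9 for row, bi in zip(A, b))
assert dot(c, x) >= lower_bound - 1e-9
```

The componentwise check is exactly where the constraints on y come from: each variable xj contributes one inequality, "column j of A dotted with y is at most cj".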
#2
(Original post by Nightowk)
...
Not an expert on this, so I can't tell you why, but as to how:

Each constraint in the new problem corresponds to a variable in the original problem. It's easiest to see by looking at the matrix A.

So, our first constraint (relating to x1) comes from the first column of the matrix A: dot that column with y, giving a11*y1 + a21*y2 + a31*y3, and this must be less than or equal to the coefficient of x1 in the original objective, 4.
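This column reading can be made mechanical: transposing A turns primal columns into dual rows. A small sketch, where the matrix entries are hypothetical (the thread only shows the first primal constraint); only the transpose pattern is the point.

```python
# Primal:  min c.x  subject to  A x >= b, x >= 0.
# Dual:    max b.y  subject to  A^T y <= c, y >= 0.
# Entries are hypothetical, for illustration only.
A = [[1, 1, 1],
     [1, 2, 0],
     [3, 0, 1]]
c = [4, 3, 9]
b = [6, 2, 1]

A_T = [list(col) for col in zip(*A)]   # transpose: dual constraint matrix

# One dual constraint per primal variable; its coefficients are a column of A.
for j, (col, cj) in enumerate(zip(A_T, c), start=1):
    lhs = " + ".join(f"{a}*y{i}" for i, a in enumerate(col, start=1))
    print(f"from x{j}:  {lhs} <= {cj}")
```

The requirements vector b and the objective vector c swap roles at the same time, which is exactly the switch asked about below.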
#3

How did the author get the constraints without knowing that A^T will be the constraint matrix for the maximization problem (given a matrix A for the minimization problem), or that the requirements vector and the objective function vector switch places?

https://imgur.com/a/tttX63
#4
As ghost walker suggests, you're trying to form the dual problem, which comes out of a Lagrange approach where each constraint gets a variable representing whether the constraint is active (determines the solution) or not. I'll have a proper read of the post later, but googling "Lagrange dual linear programming" should be of interest.
(Original post by Nightowk)
...
#5
What you seem to be doing is formulating and solving the dual problem. As a bit of background: when solving an LP problem in n variables, you typically have about 2n constraints on the variables: lower and upper bounds, plus some linear combinations of the variables. The solution always lies on the boundary and is generally determined by n of the 2n constraints. If you know which n they are, the problem is easy to solve; the difficulty is knowing which constraints are active. It's also important to know whether a generated point is feasible (satisfies all the constraints) or not.

Formulating the dual problem takes the Lagrange view, where each constraint has a variable (a Lagrange multiplier); when this multiplier is zero, the constraint is inactive, it does not determine the solution, and it could be thrown away. The optimal solution is then determined by a linear combination of the active constraints. The dual of an LP problem is another LP problem, but now in 2n variables. Sometimes the dual gives interesting insight into a problem, sometimes it's more efficient to solve, but either way it is a central idea in linear and quadratic programming.

When you generate infeasible points, as you originally suggest, you obtain lower bounds on the value of the criterion. When you generate feasible points, you necessarily obtain upper bounds (by definition). The catch with the former is knowing which combinations of constraints generate infeasible points whose values really are lower bounds. For most textbook problems this probably isn't too hard, but in the general case it will be.
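This sandwiching behaviour is weak duality, and it can be checked directly on a toy instance. The data below is hypothetical, since the thread only shows the objective and one constraint.

```python
# Weak duality for  min c.x  subject to  A x >= b, x >= 0.
# All numbers are hypothetical, for illustration only.
A = [[1, 1, 1], [1, 2, 0], [3, 0, 1]]
b = [6, 2, 1]
c = [4, 3, 9]

def dot(u, v):
    return sum(p * q for p, q in zip(u, v))

# Any primal-feasible x gives an UPPER bound c.x on the optimum:
x = [1, 5, 0]                      # checks: 6 >= 6, 11 >= 2, 3 >= 1
assert all(dot(row, x) >= bi for row, bi in zip(A, b))
upper = dot(c, x)

# Any dual-feasible y (y >= 0, A^T y <= c) gives a LOWER bound b.y:
y = [3, 0, 0]                      # columns: 3 <= 4, 3 <= 3, 3 <= 9
assert all(dot(col, y) <= cj for col, cj in zip(zip(*A), c))
lower = dot(b, y)

print(lower, "<= optimum <=", upper)
```

Closing the gap between the best lower and upper bounds is what solving the dual (or the primal) amounts to.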

https://en.m.wikipedia.org/wiki/Karu...ker_conditions

(Original post by Nightowk)

How did the author get the constraints without knowing that A^t will be the constraints matrix for the maximization problem given a matrix A for a minimization problem , or that the requirements vector and objective function vectors will switch places.

https://imgur.com/a/tttX63
Last edited by mqb2766; 9 months ago
#6
(Original post by mqb2766)
...