The 2-Phase Method

Basic Concepts and Principles

We start from a linear programming problem of the form:
Minimize \( C^t x \)

Subject to:

\( \begin{matrix} Ax \le b \\ x\geqslant 0 \end{matrix} \)

Where, as always:

\( A \in \mathbb M _{m \times n}, \, \operatorname{rank}(A)=m, \,\, b \in \mathbb R^{m}, \, C \in \mathbb R^{n}, \,\, n \geqslant m \)


Note, first of all, that this problem is not written in standard form (see the section The Simplex Algorithm).

If you want to see a complete example of the two-phase method, click here.

We have seen in the section Simplex Pivot Element how to pass from a linear programming problem to its standard form by using slack variables.

The first problem is to find an identity mxm submatrix of A with which to start the simplex algorithm, which may not be easy.

To do this, we apply the so-called two-phase method. To sum up:

Two-phase method overview



Phase 1

1) We look for an identity submatrix of dimension mxm within the matrix A. If necessary, we add as many artificial variables as there are rows, that is, up to m, so as to complete an identity mxm submatrix with which to start the algorithm.

With this we do not modify the set of extreme points of the original problem; we have just added more dimensions to the feasible polytope, in order to have a starting solution for the algorithm in the first phase.

2) Replace the original objective function by one whose components are all zero except those corresponding to the new artificial variables, which are set to 1 (i.e., the vector C = (0, ..., n times ..., 0, 1, ..., up to m times ..., 1)).

3) Run the algorithm on this problem until one of these cases is reached:

   3a) The optimal value reached is zero: the artificial variables have left the basis. In this case we can move on to phase 2 of the two-phase method.

   3b) The optimal value reached is finite and non-zero: this means the original problem has no feasible solution.

Phase 2

Eliminate the artificial variables and continue the algorithm by:
1) Restoring the original objective function C.

2) Keeping the basis remaining from phase one.

3) Continuing the algorithm with these changes until one of the four known possible outcomes of the problem is reached.


The idea of phase one is to remove the artificial variables from the basis and obtain the trivial solution (that is, zero) for the extended problem. In that case, we can go on to phase two.
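The whole procedure above can be sketched in code. The following is a minimal, self-contained two-phase simplex in pure Python; it is a sketch, not a production solver. The names (`pivot`, `simplex`, `two_phase`) are our own, and for simplicity it adds one artificial variable per row (rather than only where needed, as the example below does) and assumes b ≥ 0 and Bland's entering rule:

```python
def pivot(T, r, c):
    """Scale row r so that T[r][c] = 1, then clear column c in every other row."""
    p = T[r][c]
    T[r] = [v / p for v in T[r]]
    for i in range(len(T)):
        if i != r and T[i][c]:
            f = T[i][c]
            T[i] = [a - f * e for a, e in zip(T[i], T[r])]


def simplex(T, basis, tol=1e-9):
    """Minimize; the last row of T holds the reduced costs (Bland's rule)."""
    m = len(T) - 1
    while True:
        neg = [j for j in range(len(T[0]) - 1) if T[m][j] < -tol]
        if not neg:
            return                      # optimal: no negative reduced cost
        c = neg[0]                      # entering column (smallest index)
        rows = [(T[i][-1] / T[i][c], i) for i in range(m) if T[i][c] > tol]
        if not rows:
            raise ValueError("problem is unbounded")
        _, r = min(rows)                # ratio test chooses the leaving row
        pivot(T, r, c)
        basis[r] = c


def two_phase(A, b, c, tol=1e-7):
    """Solve min c.x subject to A x = b, x >= 0 (b >= 0 assumed)."""
    m, n = len(A), len(A[0])
    # Phase 1: append an artificial identity block; minimize their sum
    T = [A[i][:] + [float(i == j) for j in range(m)] + [b[i]] for i in range(m)]
    T.append([0.0] * n + [1.0] * m + [0.0])
    basis = list(range(n, n + m))
    for i in range(m):                  # price out the artificial basis
        T[m] = [a - e for a, e in zip(T[m], T[i])]
    simplex(T, basis)
    if abs(T[m][-1]) > tol:             # phase-1 optimum > 0: infeasible
        raise ValueError("problem is infeasible")
    if any(j >= n for j in basis):
        raise ValueError("artificial variable still basic (not handled here)")
    # Phase 2: drop the artificial columns and restore the original costs
    T = [row[:n] + [row[-1]] for row in T[:m]] + [list(map(float, c)) + [0.0]]
    for i, j in enumerate(basis):       # price out the inherited basis
        f = T[m][j]
        if f:
            T[m] = [a - f * e for a, e in zip(T[m], T[i])]
    simplex(T, basis)
    x = [0.0] * n
    for i, j in enumerate(basis):
        x[j] = T[i][-1]
    return x, -T[m][-1]


# The standard-form example developed below in this section
A = [[3, 2, 1, 1, 0, 0],
     [2, 5, 3, 0, 1, 0],
     [1, 9, -1, 0, 0, -1]]
b = [10, 15, 4]
c = [-2, -3, -4, 0, 0, 0]
x, z = two_phase(A, b, c)
print(x[:3], z)   # optimal original variables and minimum value
```

Run on the example worked out below, it recovers x2 = 0.84375, x3 = 3.59375 and the minimum value -16.90625 (i.e., maximum 16.90625).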

In this section we will see a complete example of the two-phase method and how to handle artificial and slack variables. An even more complete one is here.

Extended Theory


We saw that every linear programming problem can be transformed into standard form. For example, if we have:

    Max(2x1 + 3x2 + 4x3)
     Subject to
    3x1 + 2x2 + x3 ≤ 10
    2x1 + 5x2 + 3x3 ≤ 15
    x1 + 9x2 - x3 ≥ 4
    x1, x2, x3 ≥ 0

We can transform it as follows:

1) Change the sign of the objective function to get a minimization problem:

    Min(-2x1 - 3x2 - 4x3)

2) Change the sign of the last constraint:

    3x1 + 2x2 + x3 ≤ 10
    2x1 + 5x2 + 3x3 ≤ 15
     -x1 - 9x2 + x3 ≤ -4

3) Introduce slack variables to transform inequalities into equalities:

    3x1 + 2x2 + x3 + x4 = 10
    2x1 + 5x2 + 3x3 + x5 = 15
     -x1 - 9x2 + x3 + x6 = -4
    x1, x2, x3, x4, x5, x6 ≥ 0

4) And last, change the sign of the last constraint again:

    3x1 + 2x2 + x3 + x4 = 10
    2x1 + 5x2 + 3x3 + x5 = 15
     x1 + 9x2 - x3 - x6 = 4
    x1, x2, x3, x4, x5, x6 ≥ 0

We are done: we now have the standard formulation. What was a 3-dimensional problem became a 6-dimensional one; this is not too important, since the final calculations are carried out by the computer.
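To see the slack variables at work, take a hypothetical sample point, say (x1, x2, x3) = (1, 1, 1) (chosen only for illustration, it is not the optimum), and compute the slack values that turn the inequalities into equalities:

```python
# Hypothetical sample point, chosen only to illustrate the slack variables
x1, x2, x3 = 1.0, 1.0, 1.0

x4 = 10 - (3*x1 + 2*x2 + x3)     # slack of the first constraint
x5 = 15 - (2*x1 + 5*x2 + 3*x3)   # slack of the second constraint
x6 = (x1 + 9*x2 - x3) - 4        # surplus of the third constraint

# All three values are non-negative, so the point is feasible
print(x4, x5, x6)  # → 4.0 5.0 5.0
```

Any feasible point of the original problem extends this way to a point of the 6-dimensional standard-form problem, which is why the extreme points are preserved.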

With these changes, the matrix A becomes:
$$ A=\begin{pmatrix} 3 & 2 & 1 & 1 & 0 & 0 \\ 2 & 5 & 3 & 0 & 1 & 0\\ 1 & 9 & -1 & 0 & 0 &-1 \end{pmatrix}$$
As we can see, we do not yet have an identity submatrix with which to start the algorithm. To obtain one, we apply the two-phase method, which consists of the following:

Phase 1

1) We just need to add one artificial variable to the last constraint to complete an identity 3x3 submatrix of A. We do so:

    3x1 + 2x2 + x3 + x4 = 10
    2x1 + 5x2 + 3x3 + x5 = 15
     x1 + 9x2 - x3 - x6 + x7 = 4
    x1, x2, x3, x4, x5, x6, x7 ≥ 0

Therefore, the matrix A is now:
$$ A=\begin{pmatrix} 3 & 2 & 1 & 1 & 0 & 0 & 0\\ 2 & 5 & 3 & 0 & 1 & 0 & 0\\ 1 & 9 & -1 & 0 & 0& -1 & 1 \end{pmatrix}$$
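We can check that the columns of x4, x5 and x7 form the required identity 3x3 submatrix (a small sketch; columns are indexed from zero):

```python
# Extended matrix A with the artificial variable x7 in the last column
A = [[3, 2, 1, 1, 0, 0, 0],
     [2, 5, 3, 0, 1, 0, 0],
     [1, 9, -1, 0, 0, -1, 1]]

# Columns of x4, x5, x7 (zero-based indices 3, 4, 6) as a 3x3 submatrix
B = [[row[j] for j in (3, 4, 6)] for row in A]
print(B == [[1, 0, 0], [0, 1, 0], [0, 0, 1]])  # → True
```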
2) We can now start the algorithm, but with a cost vector whose components are all zero except those corresponding to the artificial variables; that is, we take as the objective function:

    Min (x7)

So, the cost vector is now:

    Ct = (0,0,0,0,0,0,1)
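As a quick numeric check (a sketch; the starting basic feasible solution x4 = 10, x5 = 15, x7 = 4 comes from setting the remaining variables to zero in the three equalities):

```python
Ct = [0, 0, 0, 0, 0, 0, 1]
# Starting basic feasible solution of the extended problem:
# x4 = 10, x5 = 15, x7 = 4; all other variables are zero
x0 = [0, 0, 0, 10, 15, 0, 4]

z1 = sum(ci * xi for ci, xi in zip(Ct, x0))
print(z1)  # → 4, the initial phase-1 objective, to be driven down to 0
```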

The idea is to obtain a solution with all artificial variables equal to zero; that is, we want to find a solution S0 such that

    x7 = 0

This makes the artificial variables leave the basis.

Any other solution obtained at the end of the first phase (finite or infinite) means that the problem has no solution or that the constraints are incompatible.

Phase 2

If we have found S0, the solution at the end of the first phase, we can eliminate all the artificial variables and change the objective function back to the original one. In our example we again take:

    Min(-2x1 - 3x2 - 4x3)

as the objective function; therefore, the cost vector is now:

    Ct = (-2,-3,-4,0,0,0)

Note that we have removed the variable x7. We can now apply the algorithm. It is important that, if the optimal solution assigns values to variables that are not original (slack variables), we simply omit them when reporting the solution in terms of the original variables.

For this example, applying the algorithm we obtain a finite optimal solution; numerically:

x1 = 0.0
x2 = 0.84375
x3 = 3.59375


This gives our maximum value:

Z = 16.90625
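We can verify this reported optimum against the original constraints and objective function (a simple sanity check):

```python
# Reported optimal point of the example
x1, x2, x3 = 0.0, 0.84375, 3.59375

# Original constraints of the maximization problem
assert 3*x1 + 2*x2 + x3 <= 10
assert 2*x1 + 5*x2 + 3*x3 <= 15
assert x1 + 9*x2 - x3 >= 4

# Original (maximization) objective
Z = 2*x1 + 3*x2 + 4*x3
print(Z)  # → 16.90625
```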

You can see it step by step by executing this sample here with our apps!

In practice, the number of iterations of the simplex algorithm is between 3m/2 and 3m, which is not bad considering that, as we know from theory, the number of extreme points can be as large as:
$$ \binom{n}{m} = \frac{n!}{m!(n-m)!} $$
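For the example above, with n = 6 variables and m = 3 constraints in standard form, this count can be evaluated directly (a quick sketch):

```python
import math

n, m = 6, 3
print(math.comb(n, m))  # → 20 candidate bases, versus roughly 3m/2 to 3m pivots
```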




