Linear Programming and Approximation Algorithms
Course outline
Linear programming is a fundamental tool in both optimization and algorithm design. Our goal in this course is to familiarize ourselves with various aspects of linear optimization, so that we are comfortable with its use in our research. The course intends to cover both the innards of linear programming, such as the simplex algorithm, and its use, often as a black box, in designing both exact and approximately optimal algorithms.

The course will proceed in two parts. In the first part, we will look at linear programming itself in some depth, including combinatorial problems for which linear programming offers exact algorithms. In the second part, we will look at approximation algorithms that are based on duality or rounding fractional LP solutions.

Topics that I plan to cover include:

Part I: linear programming and its geometry; the simplex method, including techniques to handle degeneracy; interior-point methods; the ellipsoid method; duality and its use in designing algorithms; network problems; integrality of optimal solutions.

Part II: LP-based approximation algorithms, including set cover, online set cover, unrelated machine scheduling, and survivable network design.

A good background in linear algebra as well as basic knowledge of computational complexity will be required.
Classes will be held Wed/Fri 2-3:30pm in A-212.

Classes begin on January 27th.

Evaluation will be on the basis of assignments (50%), a mid-term (25%), and a final exam (25%).

Reference material
We will mostly follow the book Introduction to Linear Optimization by Bertsimas and Tsitsiklis for the first part of the course, and Approximation Algorithms by Vijay Vazirani for the second part. Other good reference materials are

  • Theory of Linear and Integer Programming by Schrijver
  • Understanding and Using Linear Programming by Matoušek and Gärtner
  • Combinatorial Optimization: Algorithms and Complexity by Papadimitriou and Steiglitz
  • The Design of Approximation Algorithms by Williamson and Shmoys
  • Geometric Algorithms and Combinatorial Optimization by Grötschel, Lovász, and Schrijver.

For background in linear algebra, a good book is Introduction to Linear Algebra by Gilbert Strang.

I recommend taking notes in class, as our treatment will often differ from that in books, and I may cover topics out of order. I will post an outline of topics covered in each lecture below, including source material.

Jan 27
Two examples of LPs, with two variables and three variables, with graphical solutions. An LP in general form with notation. An LP in standard form, and converting from an LP in general form to an LP in standard form. An example of such a conversion. Matrix notation.
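As an illustrative sketch (a hypothetical example, not the one from the lecture), the snippet below solves a small two-variable LP numerically and shows its standard-form equivalent obtained by adding one slack variable per inequality.

```python
import numpy as np
from scipy.optimize import linprog

# A hypothetical two-variable LP in general form:
#   maximize   x1 + x2
#   subject to x1 + 2 x2 <= 4,  3 x1 + x2 <= 6,  x1, x2 >= 0.
# linprog minimizes, so we negate the objective.
res = linprog([-1.0, -1.0], A_ub=[[1.0, 2.0], [3.0, 1.0]], b_ub=[4.0, 6.0])

# The same LP in standard form (min c^T x, A x = b, x >= 0),
# obtained by adding one slack variable per inequality:
c_std = np.array([-1.0, -1.0, 0.0, 0.0])
A_std = np.array([[1.0, 2.0, 1.0, 0.0],
                  [3.0, 1.0, 0.0, 1.0]])
b_std = np.array([4.0, 6.0])
```

Graphically, the optimum sits at the vertex where the two inequality constraints intersect, which is exactly what the solver returns.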

References: Chapter 1 from Bertsimas and Tsitsiklis, particularly Sections 1.1 and 1.4.

Jan 29
Linear algebra primer: linear independence of vectors, matrix inverses, and uniqueness of solutions to linear systems. Subspaces, bases, dimension, and the null space of a matrix. Polyhedra and convexity. Defined vertices, extreme points, and basic feasible solutions, and proved the equivalence of these three.

References: Sections 1.5, 2.1, and 2.2 from BT book.

Feb 3
Generating basic feasible solutions: redefining basic solutions for polyhedra in standard form. Assuming that matrix A has rank m can be done without loss of generality. A basic algorithm for generating all basic solutions: for each choice of m columns of the matrix A, check whether the columns are linearly independent; if so, set the other variables to zero and solve the resulting system of linear equations in m variables. Definition of basis, basic variables, and basic columns. Adjacency of basic solutions and of bases, and the equivalence of the two definitions.
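The enumeration algorithm above can be sketched directly in code. This is a brute-force illustration on a hypothetical standard-form instance (exponential in general, fine for tiny examples):

```python
import itertools
import numpy as np

# Standard-form data (hypothetical example): A x = b, x >= 0, with m = 2, n = 4.
A = np.array([[1.0, 2.0, 1.0, 0.0],
              [3.0, 1.0, 0.0, 1.0]])
b = np.array([4.0, 6.0])

def basic_solutions(A, b):
    """Enumerate all basic solutions: for each choice of m columns, check
    linear independence, set the other variables to zero, and solve."""
    m, n = A.shape
    sols = []
    for cols in itertools.combinations(range(n), m):
        B = A[:, cols]
        if np.linalg.matrix_rank(B) < m:
            continue  # dependent columns: not a basis
        x = np.zeros(n)
        x[list(cols)] = np.linalg.solve(B, b)
        sols.append(x)
    return sols

sols = basic_solutions(A, b)
bfs = [x for x in sols if np.all(x >= -1e-9)]  # the basic *feasible* solutions
```

For this instance all six column pairs are bases, but only four of the resulting basic solutions are feasible; the rest have a negative coordinate.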

References: Section 2.3 from BT book.

Feb 5
Extreme points of polyhedra: P does not contain a line iff P has an extreme point. If P is nonempty and has an extreme point, then (for minimization) either the optimal value is negative infinity, or it is attained at an extreme point. A basic outline of the simplex algorithm: find a starting bfs; find an adjacent bfs of lower cost and move there; repeat until all adjacent bfs have equal or higher cost. Degeneracy of basic feasible solutions. Definition of a feasible direction, and the change in basic variables needed to maintain feasibility of the equality constraints when a single non-basic variable is increased.

References: Sections 2.4, 2.5, 2.6, and the first few parts of Section 3.1 from BT book.

Feb 10
An example of degeneracy, shown by considering a polyhedron with n variables and m equality constraints where n-m = 2, so that the inequality constraints can be shown in 2-dimensional space. Moving to an adjacent basis (as discussed towards the end of the previous class). Definition of reduced costs, and their importance: if all reduced costs are non-negative, the bfs is optimal (and vice versa, except in degenerate cases). The effect of degeneracy on determining whether a direction is feasible. A more detailed outline of the simplex algorithm. Proof that moving in the direction described will either give us another bfs of lower cost, or give us a ray along which all points are feasible and the cost decreases indefinitely. Finally, proof that the simplex method as described (assuming an initial bfs and nondegeneracy) terminates in finite time, with either an optimal bfs or a ray demonstrating that the optimal cost is unbounded.

References: Sections 3.1 and 3.2 from BT book.

Feb 12
No lecture today.

Feb 17
Bland's pivoting rule, and proof that it avoids cycling at a degenerate basic feasible solution. The two-phase method for finding an initial basic feasible solution using artificial variables.

This completes a description of the basic simplex method.

References: The analysis of Bland's pivoting rule was taken from lecture notes by David Williamson (but also see Section 3.4 from BT book). The two-phase method is described in Section 3.5 from BT book.

Feb 19
The full tableau implementation of simplex, and proof of correctness. The lexicographic pivoting rule and proof of correctness.

References: Sections 3.3 and 3.4 from BT book. We didn't explicitly cover the revised simplex method, but note that the proof of correctness for the full tableau method relies on it.

Feb 24
No lecture today.

Feb 26
The complexity of simplex (without proofs): examples with exponential number of pivots known for all deterministic pivoting rules. Subexponential bounds for randomized pivoting rules. Bounds on the diameter of polyhedra (linear lower bound, and subexponential upper bound). Smoothed complexity.

Duality: Dual variables as multipliers for constraints, to obtain lower bounds on the optimal cost (assuming the primal is in standard form). An alternative perspective: the price of relaxing constraints. Guidelines for obtaining the dual from the primal. Taking the dual of the dual gives back the primal. Weak duality. Strong duality. Complementary slackness, and visualizing optimal primal and dual solutions using complementary slackness.
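Strong duality is easy to check numerically. The sketch below (a hypothetical small instance, not from class) solves a standard-form primal and its dual and confirms the optimal values agree:

```python
import numpy as np
from scipy.optimize import linprog

# Primal in standard form: min c^T x  s.t.  A x = b, x >= 0.
# (A hypothetical small instance, for illustration.)
c = np.array([-1.0, -1.0, 0.0, 0.0])
A = np.array([[1.0, 2.0, 1.0, 0.0],
              [3.0, 1.0, 0.0, 1.0]])
b = np.array([4.0, 6.0])
primal = linprog(c, A_eq=A, b_eq=b)

# Dual: max p^T b  s.t.  p^T A <= c^T, p free.
# linprog minimizes, so we minimize -b^T p instead.
dual = linprog(-b, A_ub=A.T, b_ub=c, bounds=[(None, None)] * 2)
```

By weak duality every dual-feasible p gives a lower bound b^T p on the primal optimum; strong duality says the bound is tight at the optimal p.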

References: Section 3.7 for the complexity of simplex method. Sections 4.1, 4.2 and 4.3 for duality.

Mar 2
Farkas' lemma. Proof of strong duality using the lemma. Proof of Farkas' lemma (note that the proof we covered in class is incomplete: we need to show that the set P is closed and bounded for the minimum to be attained). Review of polyhedral geometry to show the equivalence of representations of polyhedra.
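For reference, one standard statement of Farkas' lemma (the variant matching standard-form LPs): for A in R^{m x n} and b in R^m, exactly one of the following two systems has a solution.

```latex
\text{(i)}\;\; \exists\, x \in \mathbb{R}^n:\; Ax = b,\; x \ge 0
\qquad\text{or}\qquad
\text{(ii)}\;\; \exists\, p \in \mathbb{R}^m:\; p^{\top}A \ge 0^{\top},\; p^{\top}b < 0.
```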

References: Sections 4.6, 4.7, and the beginning of Section 4.8.

Mar 4
Statement of the projection theorem: projecting a polyhedron onto a lower dimension yields a polyhedron. Completion of the proof from last lecture, showing that the set P is closed and bounded. Definition of recession cones and extreme rays of polyhedra. For a polyhedral cone, the cost is unbounded iff there is an extreme ray with negative projection on the cost vector. Using this, a proof of one direction of the resolution theorem, unifying multiple representations of polyhedra.

You should read the proof of the projection theorem, or derive it yourself. We did not cover the proof of the converse of the resolution theorem, or the proof of the statement about polyhedral cones above.

References: Section 2.8 for the projection theorem. Sections 4.8 and 4.9 for the remaining topics.

Mar 9
Ellipsoid method: affine transformations, ellipsoids, and how to do a single iteration.
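The single iteration can be sketched in a few lines. This is the standard update (stated here without proof): the current ellipsoid is E = {x : (x-z)^T D^{-1} (x-z) <= 1}, a violated constraint with normal a cuts it to the half-ellipsoid {x in E : a^T x <= a^T z}, and we return the minimum-volume ellipsoid containing that half.

```python
import numpy as np

def ellipsoid_step(z, D, a):
    """One iteration of the ellipsoid method (standard update formulas).

    Keeps the half-ellipsoid {x in E(z, D) : a^T x <= a^T z} and returns
    the center and matrix of the smallest ellipsoid containing it.
    """
    n = len(z)
    Da = D @ a
    alpha = np.sqrt(a @ Da)
    z_new = z - Da / ((n + 1) * alpha)
    D_new = (n * n / (n * n - 1.0)) * (D - (2.0 / (n + 1)) * np.outer(Da, Da) / (a @ Da))
    return z_new, D_new

# Example: the unit ball cut by the half-space x1 <= 0.
z, D = np.zeros(2), np.eye(2)
z1, D1 = ellipsoid_step(z, D, np.array([1.0, 0.0]))
```

The key property is that the volume (proportional to sqrt(det D)) shrinks by a constant factor depending only on n, which is what drives the iteration count.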

References: Section 8.2 from BT.

Mar 11
Ellipsoid method: How to find a starting ellipsoid. Using perturbation to remove full-dimensionality assumption. Lower bound on volume for a full-dimensional polyhedron. The complete ellipsoid method for feasibility (assuming infinite precision arithmetic, integer values for input parameters, and matrix A has rank n).

References: Section 8.3 from BT.

Mar 18,20
No class.

Mar 23
Finding a feasible point, and optimization, using the ellipsoid method. We discussed two methods. The first takes time polynomial in the number of constraints: it finds a maximal set of constraints that remain feasible when made tight, and then uses Gaussian elimination to solve these tight constraints. Optimization can then be done by combining the primal and dual into a single linear program. The second method uses the approximately feasible point to obtain a rational point that is feasible, and then binary search to find an optimal solution.

Not only does the ellipsoid method give us a polynomial time algorithm for linear programming, the running time is also independent of the number of constraints. Further, it reduces optimization to separation: given a separation oracle for a (full-dimensional) polyhedron, we can optimize over it.

Started talking about interior-point methods, focusing on primal path-following methods. Discussed KKT conditions.

References: The first method for finding a feasible point and optimization is from Theorem 10.4 in the book by Schrijver, and Section 8.4 from the BT book. The second method is from Lemma 6.2.9 and Theorem 5.3.19 from the GLS book. The discussion of interior point is from Section 9.4 of the BT book.

Mar 30
Completed discussing the primal path-following interior-point algorithm. In an iteration, given a primal and dual point in the interior for some \mu, we use the second-order Taylor-series expansion for the objective of the convex program P(\mu) around the given starting point to obtain feasible primal and dual variables for a smaller value of \mu. Thus, we follow the central-path approximately, and stop when \mu is small enough. We also saw how to obtain feasible primal and dual values in the interior.
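For reference, in the standard notation the central path for min c^T x, Ax = b, x >= 0 is the set of solutions (x(\mu), p(\mu), s(\mu)), for \mu > 0, of:

```latex
Ax = b,\; x > 0 \qquad \text{(primal feasibility)}\\
A^{\top}p + s = c,\; s > 0 \qquad \text{(dual feasibility)}\\
x_i s_i = \mu \;\;\text{for all } i \qquad \text{(relaxed complementary slackness)}
```

As \mu \to 0 these conditions approach the complementary slackness conditions, so the path leads to an optimal primal-dual pair.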

For obtaining an initial solution, for the scaling step to ensure that each coordinate of each bfs is at most 1, I mistakenly said that we should scale the variables. Actually, the constant vector b needs to be scaled, to ensure this property.

References: Section 9.4 in the BT book. We skipped the proof of Theorem 9.7.

Apr 1
Integrality of polytopes, and total unimodularity. We confined our discussion to polytopes, since these are of most interest to us. Defined integral polytopes, and their use in poly-time algorithms for solving integer linear programs (ILPs). Defined unimodular and totally unimodular (TU) matrices, and proved that if A is TU and b is integral, then the polytope Ax >= b is integral. Discussed properties of TU matrices: transposing, negating, and concatenating identity matrices all preserve total unimodularity. Characterized TU matrices in terms of dividing a subset of columns into two sets, and used this to give a poly-time algorithm for max-weight matching in bipartite graphs.
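The definition can be checked directly on tiny matrices. The sketch below brute-forces the definition (every square submatrix has determinant -1, 0, or +1) on two hypothetical examples: the incidence matrix of the bipartite graph K_{2,2}, which is TU, and that of the odd cycle C_3, which is not.

```python
import itertools
import numpy as np

def is_totally_unimodular(A):
    """Brute-force TU check (exponential; fine only for tiny matrices):
    every square submatrix must have determinant in {-1, 0, +1}."""
    m, n = A.shape
    for k in range(1, min(m, n) + 1):
        for rows in itertools.combinations(range(m), k):
            for cols in itertools.combinations(range(n), k):
                d = round(np.linalg.det(A[np.ix_(rows, cols)]))
                if d not in (-1, 0, 1):
                    return False
    return True

# Vertex-edge incidence matrix of the bipartite graph K_{2,2}: TU.
K22 = np.array([[1, 1, 0, 0],
                [0, 0, 1, 1],
                [1, 0, 1, 0],
                [0, 1, 0, 1]], dtype=float)

# Incidence matrix of the odd cycle C_3: not TU (its determinant is +-2).
C3 = np.array([[1, 1, 0],
               [1, 0, 1],
               [0, 1, 1]], dtype=float)
```

This matches the theorem from class: bipartite incidence matrices are TU (which is what makes the bipartite matching LP integral), while odd cycles are the obstruction in general graphs.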

References: The main reference is Chapter 19 from the Schrijver book, but we covered only a small selection of the material.

Apr 5
Characterizations of TU matrices. We proved a few different characterizations of TU matrices, including the one used in the previous lecture.

References: The lecture basically covered the proof of Theorem 19.3 from the Schrijver book, but we skipped characterization (vi): if each entry of matrix A is +1, -1, or 0, A is TU iff the sum of entries in any square submatrix with even row and column sums is divisible by four.

Apr 6,8
No class.

Apr 13
Total unimodularity can be determined in polynomial time (without proof). Total dual integrality (TDI) and its use; example of matching in general graphs (the Cunningham-Marsh theorem), without proof. Started approximation algorithms: a 4 log n-approximation algorithm for set cover by randomized rounding.
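The randomized-rounding scheme can be sketched on a toy instance (hypothetical, not from class): solve the LP relaxation, then repeatedly pick each set independently with probability equal to its fractional value until the universe is covered.

```python
import numpy as np
from scipy.optimize import linprog

# A toy set cover instance: universe {0, 1, 2}; three unit-cost sets.
universe = {0, 1, 2}
sets = [{0, 1}, {1, 2}, {0, 2}]

# LP relaxation: min sum_j x_j  s.t.  sum_{j : e in S_j} x_j >= 1 for each e,
# 0 <= x_j <= 1.  linprog uses <= constraints, so we negate the covering rows.
A_ub = [[-1.0 if e in S else 0.0 for S in sets] for e in sorted(universe)]
b_ub = [-1.0] * len(universe)
res = linprog([1.0] * len(sets), A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * len(sets))
x = res.x

# Randomized rounding: each round, pick set j independently with probability
# x_j; repeat until everything is covered (expected O(log n) rounds).
rng = np.random.default_rng(0)
picked, covered = set(), set()
while covered != universe:
    picked |= {j for j in range(len(sets)) if rng.random() < x[j]}
    covered = set().union(*(sets[j] for j in picked)) if picked else set()
```

Here the LP optimum is 3/2 (x_j = 1/2 each) while any integral cover needs two sets, a small example of an integrality gap; the analysis from class bounds the expected cost of the rounded cover against the LP value.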

References: The proof that total unimodularity can be determined in polynomial time can be found in Schrijver, Chapter 20. The discussion of TDI was from the lecture notes of Michel Goemans. A good proof of Cunningham-Marsh is in Chandra Chekuri's notes. The part on set cover was from Chapter 14 of Vazirani's book.

Apr 15
A 2-approximation algorithm for scheduling on unrelated parallel machines. We also discussed the integrality gap and its use as a lower bound.

References: Chapter 17 of Vazirani's book. Integrality gap is discussed in Section 12.3.

Apr 20
A polylog-competitive algorithm for online set cover, by multiplicative-weight updates and primal-dual analysis.

References: Chapter 4 of the book "The Design of Competitive Online Algorithms via a Primal-Dual Approach" by Buchbinder and Naor (this is available online). Note that of the three algorithms in the chapter, we only covered the first, as well as Section 4.4.1 for randomized rounding.

Apr 27
A 3-approximation algorithm for metric facility location, via a primal-dual algorithm.

References: Chapter 24 of Vazirani's book. We did not cover Sections 24.4.1 and 24.4.2, but these are worth reading too.

Apr 29, May 2
A 3-approximation algorithm for the generalized Steiner network problem, by iterative rounding.

References: Chapter 23 of Vazirani's book. We did a 3-approximation (for which there is a simpler counting argument), while the book gives a 2-approximation. The book also gives an example for which the algorithm is tight.

Assignment Policies
  • Each student gets 6 late days for assignments, which can be used in any combination, and for any reason (e.g., you could turn in assignment 2 two days late, assignment 3 one day late, and assignment 4 three days late. Or you could turn in assignment 1 six days late).
  • The first assignment that is late after exhausting these late days will be graded out of 50% of the total marks. Any further late assignments will not be graded at all. Any assignment turned in more than 6 days late will not be graded at all.
  • You may refer to any books or notes, but not to any online resources.
  • You may discuss the problems with others in the class, but you must write up the solution by yourself, in your own words.
  • Please write in your submission the people with whom you discussed the problems, as well as any references you used.
  • Please write clearly and legibly, and include how you arrived at the solution!