# Seminars 2014 - 2015

**This is the web page of the seminar of the Combinatorial Optimization team of the G-SCOP laboratory, in Grenoble.**

Unless stated otherwise, the Discrete Mathematics seminar takes place on Thursdays at **14:30** in **Room C319**. The organizers are **Louis Esperet** and **András Sebő**; do not hesitate to contact them.

- Thursday, June 18, 2015
**(at 14:30)**: **Maria Chudnovsky** (Columbia University): Long holes and chromatic number

We will discuss a recent proof of a conjecture of Gyárfás, stating that the class of graphs with no long induced cycles is chi-bounded.

Joint work with Alex Scott and Paul Seymour.

- Thursday, June 18, 2015
**(at 11:00)**: **Kristina Vuskovic** (University of Leeds): On graphs that do not contain a theta or a wheel

A hole in a graph is a chordless cycle of length at least 4. A wheel consists of a hole and a node that has at least three neighbors on the hole. A theta consists of 3 internally node-disjoint paths between the same two nodes, such that any two of the paths induce a hole. In this talk we give a structural characterization of graphs that do not contain wheels or thetas as induced subgraphs.
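These definitions are easy to state in code. As a small illustrative sketch (not from the talk; the graph encoding and function name are ours), here is a check that an ordered vertex list forms a hole in a graph given as an adjacency dictionary:

```python
def is_hole(adj, cycle):
    """Return True iff `cycle` (an ordered vertex list) is a hole:
    a chordless cycle of length at least 4."""
    k = len(cycle)
    if k < 4 or len(set(cycle)) != k:
        return False
    for i in range(k):
        for j in range(i + 1, k):
            adjacent = cycle[j] in adj[cycle[i]]
            consecutive = (j - i == 1) or (i == 0 and j == k - 1)
            if adjacent != consecutive:
                return False  # missing cycle edge, or a chord
    return True

# C5 is a hole; adding a chord destroys the property.
c5 = {0: {1, 4}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 0}}
print(is_hole(c5, [0, 1, 2, 3, 4]))  # True
```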

Joint work with Nicolas Trotignon and Marko Radovanovic.

- Tuesday, June 16, 2015
**(at 14:30)**: **Miklos Molnar** (LIRMM, Montpellier): New Solutions of Constrained Spanning Problems

Given a graph with positive costs on the edges and a set of vertices, finding a minimum-cost structure which connects the vertices is a classic problem. When no constraints are applied, the minimum spanning structure is a subgraph: a tree. We examine spanning problems under constraints. To solve them, spanning tree solutions have been investigated; however, the optimal solution is often not a subgraph. To find the optimum, an extension of the tree concept was proposed. A hierarchy is obtained by a graph homomorphism from a tree to a target graph. Since this spanning structure can refer to vertices (and edges) of the target graph several times, it is more flexible for satisfying constraints. The problems analyzed first are the degree-constrained spanning problems and the end-to-end constrained spanning problems with multiple constraints. Hierarchies correspond to the optimal solutions of these new problems, which can be approximated in some cases (while the constrained spanning trees cannot). The hierarchy solution is pertinent: spanning hierarchies can solve cases where trees meeting the constraints do not exist. In other cases, hierarchies outperform trees.

- Thursday, May 28, 2015
**(at 14:30)**: **Yuri Faenza** (EPFL, Lausanne): On Approximating the Knapsack Polytope (in the original space)

The knapsack problem is a classic in combinatorial optimization: given a set of objects, each with a weight and a profit, and a threshold W, choose a set of objects of maximum total profit whose total weight is at most W. While the problem is NP-hard, its optimum value can be approximated to arbitrary precision in polynomial time by an FPTAS.
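For intuition, the classical knapsack FPTAS can be sketched in a few lines (our own illustrative code, not the speaker's): profits are scaled down, then an exact dynamic program over scaled profits runs in polynomial time.

```python
def knapsack_fptas(weights, profits, W, eps):
    """Return a profit value between (1 - eps) * OPT and OPT."""
    n = len(profits)
    pmax = max(profits)
    K = eps * pmax / n  # scaling factor
    scaled = [int(p / K) for p in profits]
    total = sum(scaled)
    INF = float("inf")
    # best[q] = minimum weight achieving scaled profit exactly q
    best = [0] + [INF] * total
    for w, p in zip(weights, scaled):
        for q in range(total, p - 1, -1):
            if best[q - p] + w < best[q]:
                best[q] = best[q - p] + w
    opt_scaled = max(q for q in range(total + 1) if best[q] <= W)
    return opt_scaled * K

# Items (weight, profit); the optimum for W = 6 picks weights 2 and 4.
print(knapsack_fptas([2, 3, 4], [3, 4, 5], W=6, eps=0.1))
```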

The knapsack polytope P is the convex hull of the characteristic vectors of feasible solutions to knapsack. A natural question is whether we can "translate" the algorithmic approximation given by the FPTAS into a polyhedral approximation. More formally: how many inequalities are needed to describe a polytope Q containing P and such that, for each cost function c, the maximum of c over Q is at most (1+ epsilon) times the maximum of c over P? In this talk, we present some results on this question, focusing on the case when we require Q to be described via inequalities in the original space.

Joint work with Laura Sanità.

- Wednesday, May 13, 2015
**(at 14:30)**: **Henning Bruhn** (Ulm University, Germany): Erdős-Pósa property for long S-cycles

The classic theorem of Erdős and Pósa asserts that a graph has k disjoint cycles or a hitting set of O(k log k) vertices, that is, a set of vertices whose deletion renders the graph acyclic. Since its discovery in the 1960s the theorem has been extended and generalised in many ways. Two quite distinct directions are those of long cycles and S-cycles. For a fixed parameter l, call a cycle long if it contains at least l edges. Fiorini and Herinckx, building on the work of several others, proved that if a graph does not contain k disjoint long cycles then there is a hitting set of size O(kl log k). Pontecorvi and Wollan, on the other hand, treat S-cycles, those cycles that each contain a vertex of a fixed set S. They show, also building on previous work, that a graph contains k disjoint S-cycles or a hitting set of size O(k log k).

In this talk, I will outline how these two generalisations can be brought back together: a graph has k disjoint long S-cycles or a hitting set of size O(kl log k). For each of the parameters separately this is best possible. We conjecture, though, that a size of O(k(l + log k)) should be sufficient. The talk is based on joint work with Felix Joos and Oliver Schaudt.

- Wednesday, May 13, 2015
**(at 10:30)**: **Bruce Shepherd** (McGill University, Montreal): Single-Sink Confluent Flows

We are interested in the problem of routing demands from a set of terminals {s_1,...,s_k} to a single sink node t. Terminal s_i wishes to route d_i units of flow. We add the extra restriction that all flow through a node v must exit on a single edge. This so-called confluent flow problem was introduced to study the impact of next-hop IP routing. In 2004, Chen et al. proved two main results.

1. If a standard routing of these flows exists where the total load through any node is at most 1, then there is a confluent flow where the load at each node is at most log(n). (And this is basically best possible.)

2. If there is a standard flow, then there is a confluent flow that routes at least 1/3 of the demand.
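In a confluent flow, each node forwards everything on one edge, so a solution toward the sink is just a parent pointer per node. A toy sketch (ours; all names are illustrative) computing the resulting node loads:

```python
def node_loads(parent, demand, t):
    """Load of v = total demand routed through v (including v's own)."""
    load = {v: 0.0 for v in set(parent) | {t}}
    for s, d in demand.items():
        v = s
        load[v] += d
        while v != t:
            v = parent[v]
            load[v] += d
    return load

# Terminals a, b, c each send one unit to sink t through a confluent routing.
parent = {"a": "b", "b": "t", "c": "t"}
loads = node_loads(parent, {"a": 1.0, "b": 1.0, "c": 1.0}, "t")
print(loads["b"], loads["t"])  # 2.0 3.0
```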

We consider the case where each node v now comes with a capacity u(v). The problem becomes substantially more complex, as we shall describe. A classical work of Lovász can be viewed as a "first result" on confluent flows. He studies, for a given k, when it is possible to find a balanced spanning tree rooted at a vertex t, i.e., a tree with k subtrees hanging from t, all of size in (-1+(n-1)/k, 1+(n-1)/k).

- Thursday, April 30, 2015
**(at 14:30)**: **Eric Tannier** (LBBE, Lyon): Genome rearrangement with variable intergene size distribution

Genome rearrangements are typically modeled by operations acting on permutations of genes, with a uniform probability on every operation. I will question this uniformity hypothesis because it does not fit observations and it is not a stable distribution under a rearrangement process on genomes. Simulations show that usual computational or statistical estimators of rearrangement distance are not robust to this generalization. I will propose and test a new statistical estimator based on analogies with random graphs. It approximates the number of pairs of genes that are consecutive in two genomes up to an additive sublinear factor, and it is adapted to any intergene size distribution.
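The quantity being estimated is concrete: the number of gene adjacencies shared by two genomes. A minimal sketch (ours; unsigned genes on a single chromosome) counting pairs of genes that are consecutive in both gene orders:

```python
def common_adjacencies(genome1, genome2):
    """Count unordered gene pairs consecutive in both genomes."""
    adjacencies = lambda g: {frozenset(p) for p in zip(g, g[1:])}
    return len(adjacencies(genome1) & adjacencies(genome2))

# {1,2} and {3,4} are adjacent in both orders (orientation ignored).
print(common_adjacencies([1, 2, 3, 4, 5], [1, 2, 4, 3, 5]))  # 2
```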

Joint work with Priscila Biller, Nicolas Lartillot, Laurent Gueguen, and Carole Knibbe.

- Friday, April 24, 2015
**(at 11:00)**: **Luerbio Faria** (University of Rio de Janeiro): On the crossing number of the n-cube

The crossing number cr(G) of a graph G is the minimum number of edge crossings in a drawing of G in the plane. The rectilinear crossing number rcr(G) of G is the minimum number of crossings in a drawing of G in the plane where the edges are represented by straight line segments. The k-page crossing number cr_k(G) of G is the minimum number of crossings in a drawing of G on k half-planes where the vertices of G belong to a common straight line bounding the half-planes, and the edges are represented by semicircles in the half-planes. The n-cube graph has applications in parallel computing. In this talk we describe the results in the literature about these parameters on the n-cube graph.

- Thursday, March 26, 2015
**(at 14:30)**: **Jens Vygen** (University of Bonn, Germany): Reassembling Trees for the Traveling Salesman

Many recent approximation algorithms for different variants of the traveling salesman problem (asymmetric TSP, graph TSP, s-t-path TSP) exploit the well-known fact that a solution of the natural linear programming relaxation can be written as a convex combination of spanning trees. The main argument then is that randomly sampling a tree from such a distribution and then completing the tree to a tour at minimum cost yields a better approximation guarantee than simply taking a minimum cost spanning tree (as in Christofides' algorithm). We argue that an additional step can help: reassembling the spanning trees before sampling. Exchanging two edges in a pair of spanning trees can improve their properties under certain conditions. We demonstrate the usefulness for the metric s-t-path TSP by devising a deterministic polynomial-time algorithm that improves on Sebő's previously best approximation ratio of 8/5.

- Friday, March 20, 2015
**(at 11:00)**: **Nicolas Bousquet** (McGill University, Montreal): Coalition games on interaction graphs

We consider cooperative games where the viability of a coalition is determined by whether or not its members have the ability to communicate amongst themselves. This necessary condition for viability was proposed by Myerson and is modeled via an interaction graph; a coalition S of vertices is then viable if and only if the subgraph induced by S is connected.
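Myerson's condition is a plain connectivity check. As a small sketch (ours), a coalition S is viable iff the subgraph of the interaction graph induced by S is connected:

```python
from collections import deque

def viable(adj, S):
    """BFS inside S: S is viable iff the induced subgraph is connected."""
    S = set(S)
    if not S:
        return False
    start = next(iter(S))
    seen, queue = {start}, deque([start])
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if w in S and w not in seen:
                seen.add(w)
                queue.append(w)
    return seen == S

# Path 0-1-2-3: {0, 1, 2} is viable, {0, 2} is not.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(viable(path, {0, 1, 2}), viable(path, {0, 2}))  # True False
```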

The non-emptiness of the core of a coalition game can be tested by a well-known covering LP. Moreover, the integrality gap of its dual packing LP defines exactly the multiplicative least-core and the relative cost of stability of the coalition game. This gap is upper bounded by the packing-covering ratio which is known to be at most the treewidth of the interaction graph plus one.

We examine the packing-covering ratio and integrality gaps of graphical coalition games in more detail. First we introduce a new graph parameter, called the vinewidth (a parameter derived from the treewidth), which characterizes the worst packing-covering ratio. Then we will show that this new parameter correctly evaluates both primal and dual integrality gaps.

Joint work with Zhentao Li and Adrian Vetta.

- Thursday, March 12, 2015
**(at 14:30)**: **Guilherme D. da Fonseca** (LIRMM, Montpellier): Linear-Time Approximation Algorithms for Geometric Intersection Graphs

Numerous approximation algorithms for problems on geometric intersection graphs have been proposed in the literature, exhibiting a sharp trade-off between running times and approximation ratios. We introduce a method to obtain linear-time constant-factor approximation algorithms for such problems. To illustrate its applicability, we obtain results for three well-known optimization problems on two classes of geometric intersection graphs. Among such results, our method yields linear-time (4+epsilon)-approximations for the maximum-weight independent set and the minimum dominating set of unit disk graphs, thus bringing dramatic performance improvements when compared to previous algorithms that achieve the same approximation ratios.
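For context (this is not the algorithm of the talk), the classical leftmost-point greedy already achieves a constant-factor approximation for maximum independent set in unit disk graphs: repeatedly take the leftmost remaining center and discard all centers within unit distance of it. A sketch with our own encoding of disk centers:

```python
def greedy_independent_set(points, radius=1.0):
    """points: list of (x, y) centers; two disks are adjacent iff their
    centers are at distance at most `radius`."""
    remaining = sorted(points)  # sort by x-coordinate (then y)
    chosen = []
    while remaining:
        p = remaining.pop(0)
        chosen.append(p)
        remaining = [q for q in remaining
                     if (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 > radius ** 2]
    return chosen

pts = [(0, 0), (0.5, 0), (2, 0), (2.2, 0.1)]
print(len(greedy_independent_set(pts)))  # 2
```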

Joint work with Vinicius G. Pereira de Sa and Celina M. H. de Figueiredo.

- Thursday, February 12, 2015
**(at 14:30)**: **Diana Sasaki** (LAMSADE, Paris): Perfect matchings versus maximum weight matchings: the ratio eta for some graph classes

Motivated by applications in computer graphics, where we seek to convert a triangulation into a quadrangulation, Brazil et al. studied the ratio eta between the maximum weight of a perfect matching and the maximum weight of a general matching. Due to the computer graphics application, their study focused mostly on cubic graphs. The parameter eta can be defined for any graph which admits a perfect matching. Therefore, it is natural to consider other graph classes such as grids and regular bipartite graphs. Surprisingly, the determination of eta is often non-trivial even for such restricted types of graphs. In this work, we determine the exact value of eta for all bipartite regular graphs, rectangular grids, bipartite cylindrical grids, and bipartite toroidal grids.
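On small graphs, eta can be computed by brute force (an illustrative sketch, ours; exponential time, for experimentation only, and it assumes the graph has a perfect matching):

```python
from itertools import combinations

def eta(n, weighted_edges):
    """weighted_edges: list of (u, v, w) on vertices 0..n-1. Returns
    max weight of a perfect matching / max weight of any matching."""
    best_any, best_perfect = 0, None
    for k in range(len(weighted_edges) + 1):
        for sub in combinations(weighted_edges, k):
            verts = [x for (u, v, _) in sub for x in (u, v)]
            if len(verts) != len(set(verts)):
                continue  # a repeated endpoint: not a matching
            w = sum(e[2] for e in sub)
            best_any = max(best_any, w)
            if len(verts) == n:  # covers every vertex: perfect
                best_perfect = w if best_perfect is None else max(best_perfect, w)
    return best_perfect / best_any

# P4 with a heavy middle edge: no perfect matching uses it, so eta = 2/3.
print(eta(4, [(0, 1, 1), (1, 2, 3), (2, 3, 1)]))
```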

Joint work with Guilherme D. da Fonseca and Bernard Ries.

- Thursday, January 29, 2015
**(at 14:30)**: **Aline Parreau** (LIRIS, Lyon): Identifying codes and Vapnik-Chervonenkis dimension

An identifying code is a subset of vertices of a graph such that each vertex is uniquely determined by its closed neighbourhood within the identifying code. Using the Vapnik-Chervonenkis dimension, we give a dichotomy result for identifying codes in classes of graphs closed under induced subgraphs: either a class has infinite VC-dimension and contains arbitrarily large graphs with identifying codes of logarithmic size, or it has finite VC-dimension and identifying codes are always of polynomial size. We also prove that in the first case, the problem of determining the smallest size of an identifying code is hard to approximate within a constant factor. In the second case, this problem is polynomial-time approximable for some classes, like interval graphs, for which we provide a 6-approximation algorithm based on linear programming, and hard to approximate in other classes.

- Tuesday, December 16, 2014
**(at 14:30)**: **Benjamin Lévêque** (LIRMM, Montpellier): Structure of Schnyder woods in higher genus

We propose a simple generalization of Schnyder woods to higher genus. We give a necessary and sufficient condition for an orientation to correspond to such a Schnyder wood and we study the structure of these orientations. Unfortunately we are not able to prove the existence of these Schnyder woods in general, but a proof for the toroidal case using this structure is proposed.

- Thursday, December 4, 2014
**(at 14:30)**: **Aurélie Lagoutte** (LIP, ENS Lyon): From extended formulations of polytopes to the Clique-Stable Set problem

The concept of extended formulations of polytopes was introduced by Yannakakis. A polytope P has a small extended formulation if it is the projection of a polytope Q, lying in a higher dimensional space, with a small number of facets. In particular, Yannakakis studied the stable set polytope, focusing on the perfect graph case, where it can be fully described with the clique constraints. He highlighted a lower bound stated as a communication complexity problem: Alice is given a clique K, Bob is given a stable set S, and they have to decide via a non-deterministic protocol whether K intersects S or not. A certificate for non-intersection is a bipartition of the vertices such that K is included in one side, and S is included in the other side. The goal is to find a Clique-Stable Set separator, i.e. a set F of certificates which contains every useful bipartition, with |F| = poly(|V(G)|). The question is open in general, and Yannakakis's question (i.e. on perfect graphs) is also still open.
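The bipartition certificates are easy to state in code. An illustrative sketch (ours; the vertex names are invented): a family F of bipartitions (A, B) separates a collection of cliques and stable sets if every disjoint clique/stable-set pair has a certificate in F.

```python
def separates(F, cliques, stables):
    """Each disjoint pair (K, S) must have some (A, B) in F with
    K contained in A and S contained in B."""
    for K in cliques:
        for S in stables:
            if K & S:
                continue  # intersecting pairs need no certificate
            if not any(K <= A and S <= B for (A, B) in F):
                return False
    return True

# On the path a-b-c, two bipartitions certify all disjoint pairs below.
F = [({"a", "b"}, {"c"}), ({"b", "c"}, {"a"})]
cliques = [{"a", "b"}, {"b", "c"}]
stables = [{"a", "c"}, {"c"}, {"a"}]
print(separates(F, cliques, stables))  # True
```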

We use different techniques to prove that such a polynomial Clique-Stable Set separator exists in restricted classes of graphs: probabilistic arguments for random graphs, VC-dimension for graphs where a split graph H is forbidden, the Regularity Lemma and connections with the Erdős-Hajnal property for graphs with no induced path on k vertices nor its complement, and a decomposition theorem for perfect graphs with no balanced skew partition.

- Thursday, November 27, 2014
**(at 14:30)**: **Hyung-Chan An** (EPFL, Lausanne): LP-Based Algorithms for Capacitated Facility Location

Linear programming has played a key role in the study of algorithms for combinatorial optimization problems. In the field of approximation algorithms, this is well illustrated by the uncapacitated facility location problem. A variety of algorithmic methodologies, such as LP-rounding and primal-dual method, have been applied to and evolved from algorithms for this problem. Unfortunately, this collection of powerful algorithmic techniques had not yet been applicable to the more general capacitated facility location problem. In fact, all of the known algorithms with good performance guarantees were based on a single technique, local search, and no linear programming relaxation was known to efficiently approximate the problem.

In this paper, we present a linear programming relaxation with constant integrality gap for capacitated facility location. We demonstrate that the fundamental theories of multi-commodity flows and matchings provide key insights that lead to the strong relaxation. Our algorithmic proof of integrality gap is obtained by finally accessing the rich toolbox of LP-based methodologies: we present a constant factor approximation algorithm based on LP-rounding.

Joint work with Mohit Singh and Ola Svensson.

- Tuesday, November 25, 2014
**(at 14:30)**: **Ola Svensson** (EPFL, Lausanne): A Simple O(log log(rank))-Competitive Algorithm for the Matroid Secretary Problem

The last decade has seen an increased interest in generalizations of the secretary problem, a classical online selection problem. These generalizations have numerous applications in mechanism design for settings involving the selling of a good (e.g. ads) to agents (e.g. page views) arriving online. The matroid secretary problem is one of the most well-studied variants. It is general enough to deal with complex settings and, at the same time, it is sufficiently restricted to admit strong algorithms. A famous conjecture states that there is in fact a O(1)-competitive algorithm for the matroid secretary problem. This is an online algorithm that, in expectation and up to a constant factor, performs as well as any offline algorithm with perfect information.
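For intuition, the rank-1 special case is the classical secretary problem, solved by the well-known 1/e rule: observe the first n/e candidates, then accept the first candidate beating all of them. A small simulation (ours, with illustrative names):

```python
import math
import random

def secretary_wins(n, trials, seed=0):
    """Fraction of trials in which the 1/e rule picks the best candidate."""
    rng = random.Random(seed)
    cutoff = int(n / math.e)
    wins = 0
    for _ in range(trials):
        values = [rng.random() for _ in range(n)]
        threshold = max(values[:cutoff]) if cutoff else float("-inf")
        # first candidate after the cutoff beating the observed maximum,
        # or the last candidate if forced
        pick = next((v for v in values[cutoff:] if v > threshold), values[-1])
        wins += pick == max(values)
    return wins / trials

# The success probability approaches 1/e ≈ 0.368 as n grows.
print(secretary_wins(50, 2000))
```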

In this talk, we present a new method that improves on the previously best algorithm, both in simplicity and in its competitive ratio. The main idea of our algorithm is to decompose the problem into a distribution over a simple type of matroid secretary problems which are easy to solve. We show that this leads to an O(log log(rank))-competitive procedure.

This is joint work with Moran Feldman (EPFL) and Rico Zenklusen (ETHZ).

- Thursday, November 6, 2014
**(at 14:30)**: **Marthe Bonamy** (LIRMM, Montpellier): Graph preprocessing algorithms

Given a parameterized graph problem (partitioning the vertices into k stable sets, or into a forest plus at most k vertices, detecting a dominating set of size k, etc.), we are interested in efficient (i.e., polynomial-time) algorithms that simplify an instance of the problem. "Simplify" can have several meanings, depending on the desired result. Intuitively, if we want to k-color a graph G, and G contains a vertex of degree at most k-1, we know that this vertex has no influence on the outcome and can be removed from the graph without any loss of information. Similarly, if G contains a vertex adjacent to all the others, we already know that it must receive a color distinct from all the others appearing in the graph, so we can simply delete it and try to color the rest with only k-1 colors.
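The low-degree rule above can be turned into code directly. A sketch (ours, not the talk's algorithm): repeatedly delete vertices of degree at most k-1; if the graph empties, coloring greedily in reverse deletion order uses at most k colors.

```python
def reduce_and_color(graph, k):
    """graph: dict vertex -> iterable of neighbours. Returns a proper
    k-coloring (dict) if the low-degree rule empties the graph, else None."""
    adj = {v: set(ns) for v, ns in graph.items()}
    order = []
    queue = [v for v in adj if len(adj[v]) < k]
    while queue:
        v = queue.pop()
        if v not in adj:
            continue  # already deleted
        order.append(v)
        for w in adj[v]:
            adj[w].discard(v)
            if len(adj[w]) < k:
                queue.append(w)
        del adj[v]
    if adj:
        return None  # non-empty kernel: the rule alone does not decide
    coloring = {}
    for v in reversed(order):
        used = {coloring[w] for w in graph[v] if w in coloring}
        coloring[v] = next(c for c in range(k) if c not in used)
    return coloring

# C4 has all degrees 2, so for k = 3 the rule empties the graph.
c4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
print(reduce_and_color(c4, 3))  # a proper coloring with colors in {0, 1, 2}
```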

After identifying a number of such simplification rules, we want to guarantee the efficiency of the corresponding preprocessing algorithm, at least on a subclass of graphs (for example planar graphs). Depending on the desired result, we may try to show that the algorithm always ends with the empty graph, which guarantees that the graph class under consideration always admits a solution (for example planar graphs for the 4-coloring problem). More generally, we may try to show that the algorithm always ends with a graph whose size is bounded by a function of k, which guarantees that, for fixed k, we can decide in polynomial time whether there is a solution, and find one if so. This type of result is interesting in particular for problems that are NP-hard in general.

Here, we consider the Feedback Vertex Set problem (how many vertices must be removed from a graph to obtain a forest?) parameterized by the solution size (is it enough to remove k vertices?), in the setting of planar graphs. We present a preprocessing algorithm that produces, in linear time, a planar graph of size at most 13k. This is joint work with Lukasz Kowalik (University of Warsaw).

- Thursday, October 23, 2014
**(at 14:30)**: **Frédéric Maffray** (G-SCOP): Coloring some perfect graphs

In a graph G, an even pair is a pair of vertices x and y such that every chordless path from x to y is even. Even pairs have been used to build efficient coloring algorithms for various classes of perfect graphs. In this context, a conjecture of Reed and Everett is still open. We will present new results in this direction, concerning C4-free graphs.

- Thursday, October 2, 2014
**(at 14:30)**: **François Margot** (Tepper School of Business, Carnegie Mellon): Cut generation through binarization

Cutting planes (or cuts) are an essential component of state-of-the-art solvers for Mixed-Integer Linear Programs (MILPs). Among the many classes of cuts, split cuts are one of the most useful. It is known that split cuts can close a large part of the integrality gap on usual benchmark instances, but this requires a large CPU time. Empirical split cut generation aims at quickly generating the most useful split cuts. Advances in this direction usually focus on finding strong disjunctions involving several variables. In this talk, our aim is to use combinations of simple split cuts, i.e. cuts obtained using disjunctions of the form x_j = 0 or x_j = 1 for a binary variable x_j, to generate cuts that have similar strength to general split cuts.

Our cut generation scheme is based on two components. First, we use a binarization of general integer variables (i.e., replacing a general integer variable by a sum of binary variables), augmented by additional symmetry breaking constraints. This binarization was introduced by Roy for generating strong Lift-and-Project cuts. Second, we use rank-2 simple split cuts, i.e., cuts that can be obtained as a simple split cut for the polyhedron obtained by adding all simple split cuts for the binarization of the problem. We show that for a pure integer problem with two integer variables, these cuts are sufficient to obtain the integer hull of the problem, but that this does not generalize to problems in higher dimensions. We also give an algorithm to compute an approximation of the rank-2 simple split cut closure. We report empirical results on 22 benchmark instances showing that the bounds obtained compare favorably with those obtained with other approximate methods to compute the split closure or the lattice-free cut closure. Our method gives a better bound than the split closure on 6 instances, while it is weaker on 9 instances, with an average gap closed 3.8% smaller than that of the split closure.
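The binarization in the first component can be sketched as a plain rewriting step (our code; the constraint strings are only illustrative, and we assume the ordering inequalities y_1 >= y_2 >= ... as the symmetry breaking constraints):

```python
def binarize(x_name, u):
    """Replace an integer variable x in {0, ..., u} by a sum of u binary
    variables with symmetry-breaking ordering constraints."""
    ys = [f"{x_name}_{i}" for i in range(1, u + 1)]
    constraints = [f"{x_name} = " + " + ".join(ys)]
    constraints += [f"{a} >= {b}" for a, b in zip(ys, ys[1:])]  # symmetry breaking
    constraints += [f"{y} in {{0, 1}}" for y in ys]
    return constraints

for c in binarize("x", 3):
    print(c)
```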

Joint work with Pierre Bonami, IBM ILOG CPLEX.