Abstracts (as of May 6, 2009):


Allan Borodin, U. Toronto, Canada

Simple algorithms for sequential k-independent graphs

The local ratio technique introduced by Bar-Yehuda and Even in 1981 provides a unified framework for many approximation algorithms. Recently, the local ratio technique has been used to provide efficient approximation algorithms for a number of packing problems. Motivated by these recent algorithms, Akcoglu, Aspnes, DasGupta and Kao suggested a new class of graphs extending the class of chordal graphs. Namely, "sequential k-independent graphs" generalize the concept of a perfect elimination ordering by allowing the ``local neighborhood'' of each vertex in the ordering to have independence number k. We show how this relatively new class of graphs relates to different well-known classes of graphs. For example, planar graphs are sequentially 3-independent, as are translates of a convex object in Euclidean 2-space. Restricted to such graphs, we show how various graph problems can be efficiently approximated by conceptually simple combinatorial algorithms, thereby unifying and extending a substantial number of previous results. Sequential k-independent graphs are another example of the more general concept of sequential elimination graphs.

(joint work with Yuli Ye)
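To make the elimination-ordering viewpoint concrete, here is a minimal Python sketch of the k=1 special case: Gavril's classical greedy for maximum independent set on a chordal graph, scanning a perfect elimination ordering. Sequential k-independence relaxes the requirement that each local neighborhood have independence number 1. Function and variable names are illustrative, not from the talk.

    # Greedy maximum independent set on a chordal graph: the k = 1 case
    # of a sequential elimination graph. Assumes a perfect elimination
    # ordering `peo` is given; `adj` maps each vertex to its neighbor set.
    def greedy_mis_chordal(peo, adj):
        chosen, blocked = [], set()
        for v in peo:                  # scan vertices in elimination order
            if v not in blocked:
                chosen.append(v)       # take v into the independent set
                blocked |= adj[v]      # its neighbors may no longer be taken
        return chosen

    # Example: the path a-b-c is chordal, and (a, b, c) is a PEO.
    adj = {'a': {'b'}, 'b': {'a', 'c'}, 'c': {'b'}}
    print(greedy_mis_chordal(['a', 'b', 'c'], adj))   # ['a', 'c']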


Daniel Delling, U. Karlsruhe, Germany

Multi-Modal Route Planning

Recent research on fast route planning algorithms has focused either on road networks or on public transportation. In the long run, however, we are interested in planning routes in a multi-modal scenario: we start by car to reach the nearest train station, ride the train to the airport, fly to an airport near our destination, and finally take a taxi. In other words, we need to incorporate public transportation into road networks. In this talk I will present a very recent approach to this problem, including experimental results for transportation networks with up to 120 million edges.


Amos Fiat, Tel Aviv U., Israel

Private Coresets


Paolo G. Franciosa, U. Rome "La Sapienza", Italy

Computing graph spanners in small memory

Intuitively, a spanner of a graph is a subgraph that preserves approximate distances between all pairs of vertices. More formally, given $\alpha \geq 1$ and $\beta \geq 0$, an $(\alpha,\beta)$-spanner of a graph $G$ is a subgraph $S$ of $G$ such that for each pair of vertices the distance in $S$ is at most $\alpha$ times the distance in $G$ plus $\beta$. Graph spanners have applications in several areas, including communication networks, computational biology, computational geometry, distributed computing, and robotics. In this seminar, we will describe recent algorithms for computing spanners of massive graphs. In this setting, we assume that neither the graph nor the spanner fits in internal memory, so the data must be accessed in external memory. In particular, new algorithms for computing graph spanners in an extended data streaming model will be described.
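For intuition about the definition, the following Python sketch gives the classical in-memory greedy construction of a $(2k-1,0)$-spanner in the style of Althöfer et al.: scan edges by nondecreasing weight and keep an edge only if the current spanner does not already approximate it. This is only the textbook baseline, not the small-memory algorithms of the talk; names are illustrative.

    import heapq

    def spanner_dist(adj, s, t, cap):
        # Dijkstra restricted to distances <= cap; float('inf') means "> cap".
        best = {s: 0.0}
        pq = [(0.0, s)]
        while pq:
            d, u = heapq.heappop(pq)
            if d > cap:
                break                      # all remaining distances exceed cap
            if u == t:
                return d
            if d > best.get(u, float('inf')):
                continue                   # stale queue entry
            for v, w in adj.get(u, []):
                if d + w < best.get(v, float('inf')):
                    best[v] = d + w
                    heapq.heappush(pq, (d + w, v))
        return float('inf')

    def greedy_spanner(edges, k):
        # edges: list of (u, v, weight); returns the edge list of the spanner.
        adj, kept = {}, []
        for u, v, w in sorted(edges, key=lambda e: e[2]):
            if spanner_dist(adj, u, v, (2 * k - 1) * w) > (2 * k - 1) * w:
                kept.append((u, v, w))
                adj.setdefault(u, []).append((v, w))
                adj.setdefault(v, []).append((u, w))
        return kept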


Loukas Georgiadis, U. Western Macedonia, Greece

Frequency Dominators and Related Problems

We consider the problem of finding frequency dominators in a directed graph with a single source vertex and a single terminal vertex. A vertex x is a frequency dominator of a vertex y if and only if for each source-to-terminal path through y, the number of occurrences of x on the path is at least the number of occurrences of y. This problem was introduced in the context of dynamic program optimization, and an efficient algorithm to compute the frequency dominance relation was previously known for reducible graphs. In this talk we describe how frequency dominators can be computed efficiently in general directed graphs. Specifically, we present an algorithm that computes a sparse (O(n)-space), implicit representation of the frequency dominance relation in O(m+n) time, where n is the number of vertices and m is the number of arcs of the input graph. Given this representation, we can report all the frequency dominators of a vertex in time proportional to the output size, and answer queries of whether a vertex x is a frequency dominator of a vertex y in constant time. We also show that, given our representation of frequency dominance, we can partition the vertex set into regions in O(n) time, such that all vertices in the same region appear an equal number of times on any source-to-terminal path. The computation of regions has applications in program optimization and parallelization. Finally, we will describe a general framework that can express the above problems as reachability problems in graphs representing transitive binary relations, and for which an efficient reporting algorithm is required. We will present some preliminary results and discuss open problems.


Fabrizio Grandoni, U. Rome "Tor Vergata", Italy

Set Covering with Our Eyes Closed

Given a universe $U$ of $n$ elements and a weighted collection $S$ of $m$ subsets of $U$, the universal set cover problem is to a-priori map each element $u \in U$ to a set $S(u) \in S$ containing $u$, so that any $X\subseteq U$ is covered by $S(X)=\cup_{u\in X}S(u)$. The aim is to find a mapping such that the cost of $S(X)$ is as close as possible to the optimal set-cover cost for $X$. (Such problems are also called oblivious or a-priori optimization problems.) Unfortunately, for every universal mapping, the cost of $S(X)$ can be $\Omega(\sqrt{n})$ times larger than optimal if the set $X$ is adversarially chosen. In this paper we study the performance on average, when $X$ is a set of randomly chosen elements from the universe: we show how to efficiently find a universal map whose expected cost is $O(\log mn)$ times the expected optimal cost. In fact, we give a slightly improved analysis and show that this is the best possible. We generalize these ideas to weighted set cover and show similar guarantees for (non-metric) facility location, where we have to balance the facility opening cost with the cost of connecting clients to the facilities. We show applications of our results to multi-cut and disc-covering problems, and show how all these universal mappings give us stochastic online algorithms with the same competitive factors.


Andrew Goldberg, MSR-SVC, USA

Highway Dimension and Provably Efficient Algorithms

Motivated by computing driving directions, very efficient shortest path algorithms have been developed for road networks in the last several years. These algorithms have a preprocessing stage and a query stage. The former outputs auxiliary data that is not much larger in size than the input graph. The latter uses this data to answer s-t queries in sub-linear time. However, these results are purely experimental. One of the difficulties of theoretical analysis is that these algorithms are efficient only on certain kinds of graphs, including road networks. In this paper we introduce a model of road networks that allows us to prove good complexity bounds for some of the recent algorithms. In particular, we introduce the notion of the highway dimension of a graph and define road networks as having small highway dimension. The algorithms we are able to analyze are based on contraction hierarchies, highway hierarchies, reach, and transit nodes.

(joint work with Ittai Abraham, Amos Fiat, and Renato Werneck)


Torben Hagerup, U. Augsburg, Germany

An even simpler linear-time algorithm for verifying minimum spanning trees

A new linear-time algorithm is presented for the \emph{tree-path-maxima} problem of, given a tree $T$ with real edge weights and a list of pairs of distinct nodes in $T$, computing for each pair $(u,v)$ on the list a maximum-weight edge on the path in $T$ between $u$ and $v$. Linear-time algorithms for the tree-path-maxima problem were known previously, but the new algorithm may be considered significantly simpler than the earlier solutions. A linear-time algorithm for the tree-path-maxima problem implies a linear-time algorithm for the \emph{MST-verification} problem of determining whether a given spanning tree of a given undirected graph $G$ with real edge weights is a minimum-weight spanning tree of~$G$.
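To fix the interface, here is a naive Python solution answering a single tree-path-maximum query by a depth-first search from $u$ that carries along the maximum edge weight seen so far; it costs $O(n)$ per query, whereas the algorithm of the talk answers the whole batch of queries in linear time overall. The connection to MST verification is the cycle property: a spanning tree $T$ is minimum iff every non-tree edge $(u,v)$ weighs at least the path maximum between $u$ and $v$ in $T$. Names below are illustrative.

    # Naive tree-path-maxima: O(n) per query by depth-first search.
    # `tree` maps each node to a list of (neighbor, edge_weight) pairs.
    def path_max(tree, u, v):
        stack, seen = [(u, float('-inf'))], {u}
        while stack:
            x, best = stack.pop()
            if x == v:
                return best                # max edge weight on the u-v path
            for y, w in tree[x]:
                if y not in seen:
                    seen.add(y)
                    stack.append((y, max(best, w)))
        raise ValueError("u and v lie in different components")

    tree = {1: [(2, 5.0)], 2: [(1, 5.0), (3, 2.0)], 3: [(2, 2.0)]}
    print(path_max(tree, 1, 3))   # 5.0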


Monika Henzinger, EPFL, Switzerland

Algorithmic problems in web search and web research

Algorithmic problems on the web arise due to (1) the huge size of the web, (2) novel programming paradigms to deal with these huge problems, and (3) novel collaborations with other fields. In my presentation I will give examples of such algorithmic problems and sketch some first steps towards solving them.


John Iacono, Polytechnic U., NYC, USA

Find Union Split Search

The Find-Union-Split-Search problem is defined as maintaining a dynamic collection of disjoint sets consisting of elements from a totally ordered universe via five operations: Make-Set, Find, Union, Split, and Search. This problem generalizes several other problems such as Union-Find, Union-Find with Deunions, and variants of Union-Split-Find. A simple data structure for this problem, the FUSS-Tree, is presented, and its amortized time complexity is analyzed in terms of n, the number of Make-Set operations. Specifically, whereas no bounds were previously known for this problem better than the trivial worst-case bound of O(n) time per operation, it is shown that the FUSS-Tree achieves an amortized time bound of O(log^2 n) per operation. Furthermore, the amortized analysis of the FUSS-Tree is shown to be tight, matching Lai's lower bound of Omega(log^2 n) amortized time per operation for essentially the same algorithm.

(joint work with Ozgur Ozkan)


Riko Jacob, T. U. München, Germany

Efficient Self-Stabilizing Data Structures

In recent years, peer-to-peer overlay networks have received a lot of attention, both in terms of implemented and deployed systems and from a more theoretical point of view. One fundamental task of such a network is to maintain a good communication topology. Natural goals, known from static networks, are a small degree and an easy local routing scheme that ensures short routes. Peer-to-peer networks must not only allow nodes to easily join and leave the system, but must additionally cope with node and link failures and with nodes re-entering the system in an obsolete state. This leads to the notion of a self-stabilizing algorithm, where the nodes are required to return to a desired topology by performing only local changes to the network. Then, starting from an arbitrary connected topology (with arbitrary internal state of the nodes), the network changes itself and quickly converges to the desired topology. In this context I will consider two concrete questions. First, the notion of convergence time heavily depends on the precise model of computation: for the setting where the desired topology is a path, I will examine variants of a natural algorithm in different models (established and new ones) and show that the running time varies between linear and cubic. Second, I will consider as the desired topology the so-called skip graph, which can be seen as a randomized variant of a hypercube, and show an algorithm that converges in O(log^2 n) time.

(joint work with Christian Scheideler, Stefan Schmid, Hanjo Täubig (TUM), and Andrea Richa (Arizona State University))


Haim Kaplan, Tel-Aviv U., Israel

Stabbing a Dynamic Set of Intervals

We describe optimal dynamic data structures for maintaining a set of intervals on the line, each with an associated priority, such that we can efficiently find the interval of minimum priority containing a query point. For the case where every pair of intervals is either disjoint or nested (one contains the other), we give a particularly simple data structure. This special case is related to the problem of finding the longest prefix of a query string in a preprocessed collection of strings.

(joint work with P. Agarwal, L. Arge, M. Hershkoviz, E. Molad, K. Yi, and Bob Tarjan)
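As a plain specification of the query, here is a brute-force Python version: among the stored intervals containing the query point, return one of minimum priority. The data structures of the talk support this, together with insertions and deletions, in logarithmic time; the linear scan below only pins down the semantics, with illustrative names.

    # Brute-force stabbing query: O(n) per query, given only as the
    # specification of the problem the dynamic structures solve.
    def stab(intervals, q):
        # intervals: iterable of (left, right, priority) with left <= right
        hits = [(p, l, r) for (l, r, p) in intervals if l <= q <= r]
        return min(hits) if hits else None       # lowest priority wins

    print(stab([(0, 10, 5), (2, 4, 1), (8, 9, 3)], 3))   # (1, 2, 4)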


Alberto Marchetti-Spaccamela, "Sapienza" U. Roma, Italy

Robust sequencing on a single machine

We consider scheduling to minimize the weighted sum of completion times on a single machine that may experience unexpected changes in processing speed or even full breakdowns. We design a polynomial-time deterministic algorithm that finds a robust scheduling sequence, fixed in advance, with a solution value within $4+\epsilon$ times the value an optimal clairvoyant algorithm can achieve, knowing the disruptions in advance and even being allowed to interrupt jobs at any moment. A randomized version of this algorithm attains in expectation a ratio of $e+\epsilon$ w.r.t. a clairvoyant optimum. We show that such a ratio can never be achieved by any deterministic algorithm, by proving that the price of robustness of any such algorithm is at least $1+\sqrt{3} > e$. As a direct consequence of our results, the question whether a constant approximation algorithm exists for the problem with given machine unavailability periods is answered affirmatively. We complement this result with an FPTAS for two major special cases.

(joint work with N. Megow, M. Skutella, and L. Stougie)


Ian Munro, U. Waterloo, Canada

An Efficient Algorithm for Partial Order Production

We consider the problem of partial order production: arrange the elements of an unknown totally ordered set T into a target partially ordered set S, by comparing a minimum number of pairs in T. Special cases of this problem include sorting by comparisons, selection, multiple selection, and heap construction. We give an algorithm performing ITLB+o(ITLB)+O(n) comparisons in the worst case. Here, n denotes the size of the ground set, and ITLB denotes a natural information-theoretic lower bound on the number of comparisons needed to produce the target poset. The overall complexity of our algorithm is polynomial. This answers a question of Yao (SICOMP, 1989). Our strategy is to extend the poset S to a weak order W whose corresponding information-theoretic lower bound is provably not much larger than that for S. Taking W instead of S as a target poset, we then solve the problem by applying a multiple selection algorithm that performs not much more than ITLB comparisons. The analysis is based on the entropy of the target poset S, a quantity that can be efficiently computed and provides a good estimate of ITLB.

(joint work with Jean Cardinal, Samuel Fiorini, Gwenaël Joret, and Raphaël M. Jungers)


Rasmus Pagh, IT U. of Copenhagen, Denmark

Secondary Indexing in One Dimension: Beyond B-trees and Bitmap Indexes

Let $\Sigma$ be a finite, ordered alphabet, and let $x = x_1 x_2 \ldots x_n \in \Sigma^n$. A secondary index for $x$ answers alphabet range queries of the form: given a range $[a_l, a_r]$, return the set $I_{[a_l,a_r]} = \{ i \mid x_i \in [a_l, a_r] \}$. Secondary indexes are heavily used in relational databases and scientific data analysis. It is well known that the obvious solution, storing a dictionary for the set $\bigcup_i \{x_i\}$ with a position set associated with each character, does not always give optimal query time. In this paper we give the first theoretically optimal data structure for the secondary indexing problem. In the I/O model, the amount of data read when answering a query is within a constant factor of the minimum space needed to represent $I_{[a_l,a_r]}$, assuming that the size of internal memory is $(|\Sigma| \lg n)^{\delta}$ blocks, for some constant $\delta > 0$. The space usage of the data structure is $O(n \lg |\Sigma|)$ bits in the worst case, and we further show how to bound the size of the data structure in terms of the 0th-order entropy of $x$. We show how to support updates, achieving various time-space trade-offs. We also consider an approximate version of the basic secondary indexing problem, where a query reports a superset of $I_{[a_l,a_r]}$ containing each element not in $I_{[a_l,a_r]}$ with probability at most $\varepsilon$, where $\varepsilon > 0$ is the false positive probability. For this problem the amount of data that needs to be read by the query algorithm is reduced to $O(|I_{[a_l,a_r]}| \lg(1/\varepsilon))$ bits.
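For concreteness, here is a Python sketch of the "obvious solution" mentioned above: a dictionary from each character to its position list, with a range query concatenating the position lists of the characters falling in $[a_l, a_r]$. The talk shows this baseline is not always optimal; names are illustrative.

    from bisect import bisect_left, bisect_right

    def build_index(x):
        index = {}
        for i, c in enumerate(x):
            index.setdefault(c, []).append(i)   # position lists, in order
        return index

    def range_query(index, keys_sorted, a_l, a_r):
        lo = bisect_left(keys_sorted, a_l)
        hi = bisect_right(keys_sorted, a_r)
        out = []
        for c in keys_sorted[lo:hi]:            # each character in [a_l, a_r]
            out.extend(index[c])
        return out

    x = "abracadabra"
    idx = build_index(x)
    print(sorted(range_query(idx, sorted(idx), 'a', 'b')))
    # [0, 1, 3, 5, 7, 8, 10]: the positions of 'a' and 'b'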


Mihai Patrascu, IBM Almaden, USA

Counting Inversions Faster

We give an $O(n \sqrt{\lg n})$ algorithm for counting the number of inversions in a permutation. Previously, the best algorithm was based on the partial sums data structure of [Dietz 1989], obtaining $O(n \lg n / \lg\lg n)$. It is known that this partial sums data structure is optimal in the online case, and it was conjectured that it would also be optimal offline. Our algorithm is based on external-memory ideas, and also implies improved bounds for offline range counting, the update time in the partial sums problem, and the construction time of range-reporting data structures.

(joint work with Timothy Chan)
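As a baseline for the problem, here is the textbook $O(n \lg n)$ divide-and-conquer inversion counter in Python (count cross inversions while merging); the talk improves on this bound. Names are illustrative.

    # Count inversions by merge sort: pairs (i, j) with i < j and a[i] > a[j].
    def count_inversions(a):
        if len(a) <= 1:
            return a, 0
        mid = len(a) // 2
        left, cl = count_inversions(a[:mid])
        right, cr = count_inversions(a[mid:])
        merged, i, j, cross = [], 0, 0, 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                merged.append(left[i]); i += 1
            else:
                merged.append(right[j]); j += 1
                cross += len(left) - i    # every remaining left element exceeds right[j]
        merged += left[i:] + right[j:]
        return merged, cl + cr + cross

    print(count_inversions([3, 1, 2])[1])   # 2 inversions: (3,1) and (3,2)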


Marco Pellegrini, IIT-CNR, Pisa, Italy

Extraction and Classification of Dense Implicit Communities in the Web Graph: Finding Dense Subgraph of the Web-Graph

The World Wide Web (WWW) is rapidly becoming important for society as a medium for sharing data, information and services, and there is a growing interest in tools for understanding collective behaviors and emerging phenomena in the WWW. In this paper we focus on the problem of searching and classifying communities in the web. Loosely speaking, a community is a group of pages related to a common interest. More formally, communities have been associated in the computer science literature with the existence of a locally dense subgraph of the web-graph (where web pages are nodes and hyperlinks are arcs of the web-graph). The core of our contribution is a new scalable algorithm for finding relatively dense subgraphs in massive graphs. We apply our algorithm to web-graphs built on three publicly available large crawls of the web (with raw sizes up to 120M nodes and 1G arcs). The effectiveness of our algorithm in finding dense subgraphs is demonstrated experimentally by embedding artificial communities in the web-graph and counting how many of these are blindly found. Effectiveness increases with the size and density of the communities: it is close to 100% for communities of thirty nodes or more (even at low density), and still about 80% for communities of twenty nodes with over 50% of the arcs present. At the lower extreme, the algorithm catches 35% of dense communities made of ten nodes. We also develop some sufficient conditions for the detection of a community under some local graph models and not-too-restrictive hypotheses. We complete our Community Watch system by clustering the communities found in the web-graph into homogeneous groups by topic and labeling each group by representative keywords.

(joint work with Filippo Geraci and Yon Dourisboure)
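To illustrate the "dense subgraph" objective (not the talk's scalable algorithm), here is a small Python sketch of Charikar's greedy peeling, a classical 2-approximation for maximizing density = |edges| / |vertices|: repeatedly delete a minimum-degree vertex and remember the densest intermediate graph. This quadratic toy assumes an in-memory adjacency dict.

    def densest_subgraph(adj):
        adj = {v: set(ns) for v, ns in adj.items()}   # defensive copy
        m = sum(len(ns) for ns in adj.values()) // 2
        live = set(adj)
        best_density, best_set = 0.0, set()
        while live:
            if m / len(live) >= best_density:
                best_density, best_set = m / len(live), set(live)
            v = min(live, key=lambda u: len(adj[u]))  # peel a min-degree vertex
            m -= len(adj[v])
            for u in adj[v]:
                adj[u].discard(v)
            live.remove(v)
        return best_density, best_set

    adj = {'a': {'b','c'}, 'b': {'a','c'}, 'c': {'a','b','d'}, 'd': {'c'}}
    print(densest_subgraph(adj))   # (1.0, {'a','b','c'}): the triangle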


Cynthia Phillips, Sandia National Laboratories, USA

Sensor Placement For Municipal Water Networks

We consider the problem of placing a limited number of sensors in a municipal water distribution network to minimize the impact over a given suite of contamination incidents. In its simplest form, the sensor placement problem is a p-median problem that has structure extremely amenable to exact and heuristic solution methods. We describe the solution of real-world instances using integer programming, local search, or a Lagrangian method; the Lagrangian method is necessary for the solution of large problems on small PCs. We summarize a number of other heuristic methods for effectively addressing issues such as sensor failures, robustness, and problem size/approximation-quality tradeoffs. We present an open combinatorial problem whose solution would further improve space savings while maintaining approximation quality for the Lagrangian method. These algorithms are incorporated into the TEVA-SPOT toolkit, a software suite that the US Environmental Protection Agency has used and is using to design contamination warning systems for US municipal water systems.

(joint work with most of the TEVA-SPOT team)
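For reference, the simplest form of the problem can be written as the standard p-median integer program below. The notation (incidents $a \in A$ with weights $\alpha_a$, candidate locations $i \in L$, impact $d_{ai}$ if incident $a$ is first detected at $i$, placement variables $s_i$, assignment variables $x_{ai}$) is ours, chosen for illustration, and is not taken from the talk.

    \begin{align*}
    \min\ & \sum_{a \in A} \alpha_a \sum_{i \in L} d_{ai}\, x_{ai} \\
    \text{s.t.}\ & \textstyle\sum_{i \in L} x_{ai} = 1 \quad \text{for all } a \in A, \\
    & x_{ai} \le s_i \quad \text{for all } a \in A,\ i \in L, \\
    & \textstyle\sum_{i \in L} s_i \le p, \qquad s_i,\, x_{ai} \in \{0,1\}.
    \end{align*}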


Rajeev Raman, U. of Leicester, UK

Universal Succinct Representations of Trees

We consider the succinct representation of \emph{ordinal} and \emph{cardinal} trees on the RAM with logarithmic word size. Given a tree $T$, our representations support the following operations in $O(1)$ time: (i) $\mbox{{\tt BP-substring}}(i,b)$, which reports the substring of length $b$ bits ($b$ is at most the word size) beginning at position $i$ of the balanced parenthesis representation of $T$, (ii) $\mbox{{\tt DFUDS-substring}}(i,b)$, which does the same for the \emph{depth first unary degree sequence} representation, and (iii) a similar operation for tree-partition based representations of $T$. We give:
- an asymptotically space-optimal $2n + o(n)$ bit representation of $n$-node ordinal trees that supports all the above operations with $b = \Theta(\log n)$, answering an open question from [He et al., ICALP'07];
- an asymptotically space-optimal $C(n,k) + o(n)$-bit representation of $k$-ary cardinal trees that supports (with $b = \Theta(\sqrt{\log n})$) operations (ii) and (iii) above, on the ordinal tree obtained by removing labels from the cardinal tree, as well as the usual label-based operations. As a result, we obtain a fully-functional cardinal tree representation with the above space complexity, answering an open question from [Raman et al., SODA'02].
Our new representations are able to simultaneously \emph{emulate} the BP, DFUDS and partitioned representations using a single instance of the data structure, and thus aim towards \emph{universality}. They not only support the union of all the ordinal tree operations supported by these representations, but will also automatically inherit any new operations supported by these representations in the future.

(joint work with Arash Farzan and Srinivasa Rao Satti)


Liam Roditty, Bar-Ilan U., Israel

Fault-Tolerant Spanners for General Graphs

The paper concerns graph spanners that are resistant to vertex or edge failures. Given a weighted undirected $n$-vertex graph $G=(V,E)$ and an integer $k \geq 1$, the subgraph $H=(V,E')$, $E'\subseteq E$, is a {\em spanner} of stretch $k$ (or, a $k$-spanner) of $G$ if $\delta_H(u,v) \leq k\cdot \delta_G(u,v)$ for every $u,v \in V$, where $\delta_{G'}(u,v)$ denotes the distance between $u$ and $v$ in $G'$. Graph spanners have been extensively studied since their introduction over two decades ago. It is known how to efficiently construct a $(2k-1)$-spanner of size $O(n^{1+1/k})$, and this size-stretch tradeoff is conjectured to be tight. The notion of {\em fault tolerant} spanners was introduced a decade ago in the geometric setting [Levcopoulos et al., STOC'98]. A subgraph $H$ is an $f$-vertex fault tolerant $k$-spanner of the graph $G$ if for any set $F\subseteq V$ of size at most $f$ and any pair of vertices $u,v \in V\setminus F$, the distances in $H$ satisfy $\delta_{H\setminus F}(u,v) \leq k\cdot \delta_{G\setminus F}(u,v)$. Levcopoulos et al. presented an efficient algorithm that constructs an $f$-vertex fault tolerant \emph{geometric} $(1+\epsilon)$-spanner; that is, given a set $S$ of $n$ points in $\mathbb{R}^d$, their algorithm finds a sparse graph $H$ such that for every set $F\subseteq S$ of size $f$ and any pair of points $u,v \in S\setminus F$ it satisfies that $\delta_{H\setminus F}(u,v) \leq (1+\epsilon) |uv|$, where $|uv|$ is the Euclidean distance between $u$ and $v$. A fault tolerant geometric spanner with optimal maximum degree and total weight was presented in [Czumaj and Zhao, SoCG'03]. This paper also raised as an open problem the question whether it is possible to obtain a fault tolerant spanner for an arbitrary undirected weighted graph. The current paper answers this question in the affirmative, presenting an $f$-vertex fault tolerant $(2k-1)$-spanner whose size is $O(f^3 k^{f+1} \cdot n^{1+1/k}\log^{1-1/k}n)$. Interestingly, the stretch of the spanner remains unchanged, while the size of the spanner only increases by a factor that depends on the stretch $k$, on the number of potential faults $f$, and on logarithmic terms in $n$. In addition, we consider the simpler setting of $f$-{\em edge} fault tolerant spanners (defined analogously). We present an $f$-edge fault tolerant $(2k-1)$-spanner with an edge set of size $O(f\cdot n^{1+1/k})$ (only $f$ times larger than standard spanners). For both edge and vertex faults, our results are shown to hold when the given graph $G$ is weighted.

(joint work with David Peleg and Michael Langberg)


Jared Saia, U. New Mexico, USA

Fear in Mediation: Exploiting the "Windfall of Malice"

The presence of adversarial players in games can, somewhat surprisingly, actually improve the social welfare. This phenomenon has been called the "windfall of malice". In this talk, we consider whether it is possible to achieve the windfall of malice even when there are no adversarial players in a game. Surprisingly, we are able to show that, for a well-studied virus inoculation game, a mediator based on the windfall-of-malice idea can achieve social welfare that is asymptotically optimal, even without the presence of adversarial players. To better understand the applicability of our technique of using a mediator based on the windfall of malice, we also describe the following two results. First, we give an important necessary condition that must hold for any game in order for any mediator to achieve social welfare better than the best Nash equilibrium. Second, we study how many random bits are required by a mediator that achieves the optimal social welfare in anonymous games.


Robert E. Tarjan, Princeton U. and HP, USA

Rank-Pairing Heaps

I shall describe the "rank-pairing heap," an implementation of heaps (priority queues) that combines the asymptotic efficiency of Fibonacci heaps with much of the simplicity of pairing heaps. Unlike all other heap implementations that match the bounds of Fibonacci heaps, our structure needs only one cut and no other structural changes per key decrease; the trees representing the heap can evolve to have arbitrary structure. Although the data structure is simple, its analysis is not. Our initial experiments indicate that rank-pairing heaps perform almost as well as pairing heaps on typical operation sequences and better on worst-case sequences.

(joint work with Bernhard Haeupler and Siddhartha Sen).


Jan Arne Telle, University of Bergen, Norway

Width parameters of graphs

Various parameters of graphs, like tree-width, clique-width and rank-width, have applications in the design of efficient graph algorithms. In this talk we discuss such 'width' parameters of graphs, with a focus on the recently introduced boolean-width. While rank-width is based on the number of GF(2)-sums (1+1=0) of rows of adjacency matrices, boolean-width is based on the number of boolean sums (1+1=1). When solving certain graph optimization problems by divide-and-conquer, the number of distinct unions of neighborhoods across cuts in the graph becomes a runtime bottleneck. These cases call for a decomposition of the graph minimizing boolean-width.

(joint work with Binh-Minh Bui-Xuan and Martin Vatshelle)
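A toy Python example of the distinction: rows are the neighborhoods, as 0/1 vectors, of the vertices on one side of a cut. Rank-width looks at GF(2) sums of such rows (1+1=0), while boolean-width looks at boolean sums, i.e., unions (1+1=1); the boolean-width of a cut is, roughly, the logarithm of the number of distinct unions of neighborhoods. This is only an illustration of the two sums, not an implementation from the talk.

    # Two neighborhoods across a cut, as 0/1 row vectors over the far side.
    rows = [(1, 0, 1), (0, 1, 1)]

    def gf2_sum(a, b):
        return tuple((x + y) % 2 for x, y in zip(a, b))    # 1 + 1 = 0

    def bool_sum(a, b):
        return tuple(x | y for x, y in zip(a, b))          # 1 + 1 = 1

    print(gf2_sum(*rows))    # (1, 1, 0): symmetric difference of neighborhoods
    print(bool_sum(*rows))   # (1, 1, 1): union of neighborhoods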


Renato Werneck, MSR-SVC, USA

Fast Local Search for Steiner Trees in Graphs

We present efficient algorithms that implement four local searches for the Steiner problem in graphs: vertex insertion, vertex elimination, key-path exchange, and key-vertex elimination. In each case, we show how to find an improving neighboring solution (or prove that none exists) in O(m log n) time on graphs with n vertices and m edges. Many of the techniques and data structures we use are interesting in their own right. Besides the theoretical interest, our results have practical impact: these local searches have been shown to find high-quality solutions in practice, but high running times limited their applicability.

(joint work with Eduardo Uchoa)


Sue Whitesides, U. Victoria, Canada

Putting Theory into Practice in Geometric Algorithms and Data Structures

We review three experiences with testing the practicality of theoretical results in geometric contexts. In particular, experiments with robot localization, with the 3D visibility complex, and with layered graph drawing will be presented.


Udi Wieder, MSR-SVC, USA

Another look at Balanced Allocations

Say we place m balls sequentially into n bins, where each ball is placed by randomly selecting d bins and placing it in the least loaded of them. What is the load of the maximum bin when $m \gg n$? It is well known that when $d=1$ the maximum load is $m/n + \tilde{O}(\sqrt{m/n})$, whereas when $d \geq 2$ the maximum load is $m/n + \log\log n$. Thus when more than one bin is sampled, the gap between maximum and average does not increase with the number of balls. We investigate a larger family of placement schemes. We focus on the case where, each time a ball is placed, with probability one half we use $d=1$ and with probability one half we use $d=2$. We show that the maximum load for this process is $m/n + \log n$. Thus, surprisingly, even though half the balls are thrown using a one-choice scheme, the gap between maximum load and average load does not increase with the number of balls. Along the way we develop new proof techniques and show general conditions under which a placement scheme yields a balanced allocation.

(joint work with Yuval Peres and Kunal Talwar)
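The mixed scheme is easy to simulate; here is a minimal Python sketch, with a fair coin deciding between one and two choices per ball, reporting the gap (maximum load minus average load). Per the result above, the gap should stay around log n as m grows; names are illustrative.

    import random

    def simulate(m, n, seed=0):
        rng = random.Random(seed)
        load = [0] * n
        for _ in range(m):
            if rng.random() < 0.5:                 # one-choice step (d = 1)
                i = rng.randrange(n)
            else:                                  # two-choice step (d = 2)
                a, b = rng.randrange(n), rng.randrange(n)
                i = a if load[a] <= load[b] else b
            load[i] += 1
        return max(load) - m / n                   # the gap

    for m in (10**4, 10**5, 10**6):
        print(m, round(simulate(m, 100), 1))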


David Williamson, Cornell University, USA

The Rank Aggregation Problem

The rank aggregation problem was introduced by Dwork, Kumar, Naor, and Sivakumar (WWW 2001) in the context of finding good rankings of web pages by drawing on multiple input rankings from various search engines. The work draws on earlier work in the social choice literature on combining voter preferences specified by ranked alternatives. Dwork et al. propose finding a ranking that minimizes the sum of its Kendall distances to the input rankings; this can be viewed as a weighted feedback arc set problem. I will give an overview of the rank aggregation problem and some of its applications, including an application to intranet search (WWW 2003). I will also cover recent work on finding good approximation algorithms for the problem. In particular, Ailon, Charikar, and Newman (STOC 2005) introduced a randomized ``pivoting'' algorithm for rank aggregation and related problems; recent work has extended this to deterministic pivoting algorithms for constrained versions of the problem (van Zuylen, Hegde, Jain, and Williamson, SODA 2007; van Zuylen and Williamson 2007) and has yielded a polynomial-time approximation scheme for the problem (Kenyon-Mathieu and Schudy, STOC 2007). Recent experimental work of Schalekamp and van Zuylen has given us a good sense of the tradeoffs of these various algorithms in practice.
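To make the objective and the pivoting idea concrete, here is a toy Python sketch: the Kendall distance between two rankings, and a recursive random-pivot aggregator in the spirit of Ailon, Charikar, and Newman (place each element before the pivot iff a majority of input rankings does). This is only an illustration, not a faithful implementation of any of the cited papers.

    import random

    # Kendall distance: number of pairs ordered differently by r1 and r2.
    def kendall(r1, r2):
        pos = {x: i for i, x in enumerate(r2)}
        return sum(1 for i in range(len(r1)) for j in range(i + 1, len(r1))
                   if pos[r1[i]] > pos[r1[j]])

    # Randomized pivoting: partition around a random pivot by majority vote.
    def pivot_rank(items, rankings, rng):
        if len(items) <= 1:
            return list(items)
        p = rng.choice(items)
        pos = [{x: i for i, x in enumerate(r)} for r in rankings]
        before = [x for x in items if x != p
                  and 2 * sum(pp[x] < pp[p] for pp in pos) > len(rankings)]
        after = [x for x in items if x != p and x not in before]
        return pivot_rank(before, rankings, rng) + [p] + pivot_rank(after, rankings, rng)

    votes = [['a','b','c','d'], ['a','c','b','d'], ['b','a','c','d']]
    agg = pivot_rank(['a','b','c','d'], votes, random.Random(1))
    print(agg, sum(kendall(agg, v) for v in votes))   # aggregate and its cost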


Christos Zaroliagis, U. Patras, Greece

Incentive-Compatible Robust Line Planning

The problem of robust line planning asks for a set of origin-destination paths (lines), along with their frequencies, in an underlying railway network infrastructure, which are robust to fluctuations of real-time parameters of the solution. In this work, we investigate a variant of robust line planning stemming from recent regulations in the railway sector that introduce competition and free railway markets, and set up a new application scenario: there is a (potentially large) number of line operators that have their lines fixed and operate as competing entities struggling to exploit the underlying network infrastructure via frequency requests, while the management of the infrastructure itself remains the responsibility of a single (typically governmental) entity, the network operator. The line operators are typically unwilling to reveal their true incentives. Nevertheless, the network operator would like to ensure a fair (or socially optimal) usage of the infrastructure, e.g., by maximizing the (unknown to it) aggregate incentives of the line operators. We show that this can be accomplished in certain situations via a (possibly anonymous) incentive-compatible pricing scheme for the usage of the shared resources that is robust against the unknown incentives and the changes in the demands of the entities. This brings up a new notion of robustness, which we call incentive-compatible robustness: the robustness of the system is its tolerance to the entities' unknown incentives and elasticity of demands, aiming at an eventual stabilization to an equilibrium point that is as close as possible to the social optimum.


Li Zhang, Microsoft Research, USA

Proportional response dynamics and market equilibrium

Proportional response dynamics models utility-based user behavior in which each participant responds to the others in proportion to the utility previously received from them. This natural behavior, despite its simplicity, can be shown to lead to the market equilibrium in two families of economies, namely the Fisher market with constant elasticity of substitution (CES) utilities and a general exchange economy that models peer-to-peer file downloading. It is, to our knowledge, the first such dynamics that converges to the market equilibrium and is not a variant of the tatonnement process. In addition, when the utility functions are strictly concave, it converges rapidly, with a running time comparable to that of the existing optimization-based methods. The proof uses the Eisenberg-Gale characterization of the market equilibrium and makes a connection to the matrix scaling process.
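Here is a minimal Python sketch of the dynamics in a linear Fisher market (a CES special case): in each round, buyer i splits its budget across goods in proportion to the utility it received from each good in the previous round. This toy assumes strictly positive utilities and unit supplies; names and the instance are illustrative, not from the talk.

    def proportional_response(B, U, rounds=200):
        # B[i]: budget of buyer i; U[i][j]: utility of buyer i for good j.
        n, m = len(B), len(U[0])
        bids = [[B[i] / m] * m for i in range(n)]   # start with uniform bids
        for _ in range(rounds):
            price = [sum(bids[i][j] for i in range(n)) for j in range(m)]
            # x[i][j]: fraction of good j allocated to buyer i
            x = [[bids[i][j] / price[j] for j in range(m)] for i in range(n)]
            for i in range(n):
                util = [U[i][j] * x[i][j] for j in range(m)]
                total = sum(util)
                bids[i] = [B[i] * u / total for u in util]   # respond in proportion
        return price

    # Two buyers, two goods: prices approach the equilibrium prices (1, 1).
    print(proportional_response([1.0, 1.0], [[2.0, 1.0], [1.0, 2.0]]))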


Uri Zwick, Tel Aviv U., Israel

Soft heaps simplified

Chazelle (JACM 47(6), 2000) devised an approximate meldable priority queue data structure, called _Soft Heaps_, and used it to obtain the fastest known deterministic comparison-based algorithm for computing minimum spanning trees, as well as some new algorithms for selection and approximate sorting problems. If n elements are inserted into a collection of soft heaps, then up to \eps n of the elements still contained in these heaps, for a given _error parameter_ \eps, may be _corrupted_, i.e., have their keys artificially increased. In exchange for allowing these corruptions, each soft heap operation is performed in O(\log(1/\eps)) amortized time. We present a simplified implementation and analysis of soft heaps.