Dutch Ami Day on Computational Game Theory


11:00 - 11:45: Hans Peters

We consider several related set extensions of the core and the anticore of games with transferable utility. An efficient allocation is undominated if it cannot be improved, in a specific way, by sidepayments changing the allocation or the game. The set of all such allocations is called the undominated set, and we show that it consists of finitely many polytopes with a core-like structure. One of these polytopes is the L1-center, consisting of all efficient allocations that minimize the sum of the absolute values of the excesses. The excess Pareto optimal set contains the allocations that are Pareto optimal in the set obtained by ordering the sums of the absolute values of the excesses of coalitions and the absolute values of the excesses of their complements. The L1-center is contained in the excess Pareto optimal set, which in turn is contained in the undominated set. For three-person games all these sets coincide. These three sets also coincide with the core for balanced games and with the anticore for antibalanced games. We study properties of these sets and provide characterizations in terms of balanced collections of coalitions. We also propose a single-valued selection from the excess Pareto optimal set, the min-prenucleolus, which is defined as the prenucleolus of the minimum of a game and its dual.
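As a small illustration of the quantities the abstract works with, the sketch below (plain Python; the three-player game is hypothetical, not taken from the talk) computes coalition excesses e(S, x) = v(S) - x(S), the L1 objective that the L1-center minimizes over efficient allocations, and a core-membership test.

```python
from itertools import combinations

def coalitions(n):
    """All non-empty proper coalitions of players 0..n-1."""
    players = range(n)
    for size in range(1, n):
        for S in combinations(players, size):
            yield frozenset(S)

def excess(v, S, x):
    """e(S, x) = v(S) - x(S): what coalition S gains or loses under x."""
    return v[S] - sum(x[i] for i in S)

def l1_objective(v, x, n):
    """Sum of absolute excesses; the L1-center minimizes this over
    efficient allocations."""
    return sum(abs(excess(v, S, x)) for S in coalitions(n))

def in_core(v, x, n):
    """x lies in the core iff no coalition has a positive excess."""
    return all(excess(v, S, x) <= 1e-9 for S in coalitions(n))

# Hypothetical 3-player game: v({i}) = 0, v(pairs) = 4, v(N) = 6.
n = 3
v = {S: (4 if len(S) == 2 else 0) for S in coalitions(n)}
x = (2.0, 2.0, 2.0)  # the symmetric efficient allocation
print(l1_objective(v, x, n))  # each singleton has excess -2, each pair 0
print(in_core(v, x, n))
```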


11:45 - 12:10: Pascal Lenzner

Network Creation Games attempt to model the creation and evolution of complex networks that are built in a decentralized way by selfish agents. Each agent tries to balance two conflicting objectives: maximizing the quality of network usage and minimizing the number of (costly) links that have to be created and maintained to connect to the network. The Internet is a prominent example of such a network and can be considered an equilibrium state of a Network Creation Game: the agents are Internet Service Providers and links connect different autonomous systems.

I will present insights on the impact of greediness in the Network Creation Games introduced by Fabrikant et al. [PODC'03]. Based on the ideas that ISPs prefer smooth adaptations over radically re-designing their infrastructure and that computing a best possible strategy is NP-hard, I will introduce and analyze a very natural solution concept, called the Greedy Equilibrium.

It turns out that naive greedy play leads to remarkably stable networks in the SUM-version, where agents attempt to minimize their average distance to all other agents. For the MAX-version, where agents attempt to minimize their maximum distance, I will present positive results for tree-networks and a strong negative result for general networks.


13:10 - 13:35: Gergely Csapó

We study the problem of finding the profit-maximizing mechanism for a monopolistic provider of a single, non-excludable public good. This problem has been well studied for the case when agents' types are independently distributed, but the literature is almost silent about the case of general joint distributions. We investigate the problem from an automated mechanism design perspective, meaning that we want to understand the algorithmic complexity of finding the optimal mechanism when we are given a finite set of type profiles and their distribution.

We show that the optimal deterministic, dominant strategy incentive compatible, ex-post individual rational mechanism can be computed in polynomial time by reducing the problem to finding a maximal weight closure in a directed graph. Node weights in the graph correspond to conditional virtual values. When valuations are independently distributed, the constructed mechanism is also optimal among all Bayes-Nash implementable and ex-interim individual rational mechanisms. In contrast, for dependent valuations strictly higher profit can be achieved if one allows for ex-interim individual rationality. By invoking techniques due to Crémer and McLean, we show that optimal deterministic, ex-interim individual rational, Bayes-Nash implementable or dominant strategy implementable mechanisms can still be found in polynomial time if the joint distribution of types satisfies certain regularity conditions.
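To illustrate the closure problem the reduction targets, here is a minimal sketch; the instance, node names, and weights are made up, and the brute-force search is for illustration only. The polynomial-time result rests on the classical reduction of maximal-weight closure to a minimum s-t cut.

```python
from itertools import combinations

def is_closed(S, edges):
    """S is a closure if no arc leaves S: u in S implies v in S for (u, v)."""
    return all(v in S for (u, v) in edges if u in S)

def max_weight_closure(weights, edges):
    """Brute-force maximal-weight closure (exponential; in polynomial time this
    is solved via the classical minimum s-t cut reduction)."""
    nodes = list(weights)
    best, best_set = 0, frozenset()  # the empty set is always closed
    for size in range(1, len(nodes) + 1):
        for S in map(frozenset, combinations(nodes, size)):
            if is_closed(S, edges):
                w = sum(weights[u] for u in S)
                if w > best:
                    best, best_set = w, S
    return best, best_set

# Hypothetical instance: node weights play the role of conditional virtual values.
weights = {'a': 5, 'b': -2, 'c': 3, 'd': -4}
edges = [('a', 'b'), ('c', 'd')]  # selecting a forces b; selecting c forces d
print(max_weight_closure(weights, edges))
```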


13:35 - 14:00: Sunil Simon

Using game-theoretic concepts, we study the consequences of product adoption by agents who form a social network. We use the threshold model of social networks, in which nodes influenced by their neighbours can adopt one out of several alternatives, and associate with each social network a strategic game between the agents. Like certain classes of potential games, these games exhibit the "join the crowd" property: the payoff of each player weakly increases when more players choose the same strategy. However, such games may have no Nash equilibrium, and determining whether a Nash equilibrium exists is NP-complete. The situation changes when the underlying graph of the social network is a DAG, a simple cycle, or has no source nodes. For these three classes we determine the complexity of deciding whether a Nash equilibrium exists.
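The following sketch illustrates one plausible reading of the threshold-model game; the payoff function, weights, and thresholds are assumptions for illustration, not the paper's exact definitions. A node adopting product t receives the total weight of its neighbours adopting t minus its threshold, and a profile is a Nash equilibrium when no unilateral deviation helps.

```python
def payoff(i, choice, profile, graph, weight, theta):
    """Hypothetical payoff: 0 for not adopting; otherwise the total weight of
    in-neighbours adopting the same product, minus the node's threshold."""
    if choice is None:  # not adopting any product
        return 0.0
    support = sum(weight[(j, i)] for j in graph[i] if profile[j] == choice)
    return support - theta[i]

def is_nash(profile, graph, weight, theta, products):
    """No player can strictly improve by a unilateral deviation."""
    for i in profile:
        current = payoff(i, profile[i], profile, graph, weight, theta)
        for alt in list(products) + [None]:
            if payoff(i, alt, profile, graph, weight, theta) > current + 1e-9:
                return False
    return True

# Hypothetical 3-node directed cycle a -> b -> c -> a, unit weights.
graph = {'a': {'c'}, 'b': {'a'}, 'c': {'b'}}  # in-neighbours of each node
weight = {('c', 'a'): 1.0, ('a', 'b'): 1.0, ('b', 'c'): 1.0}
theta = {'a': 0.5, 'b': 0.5, 'c': 0.5}
everyone_x = {'a': 'X', 'b': 'X', 'c': 'X'}
print(is_nash(everyone_x, graph, weight, theta, {'X'}))
```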


14:00 - 14:25: Bart de Keijzer

We consider a variant of congestion games where every player i expresses for each resource e and player j a positive externality, i.e., a value for being on e together with player j. Rather than adopting a game-theoretic perspective, we take an optimization point of view and consider the problem of optimizing the social welfare. We show that this problem is NP-hard even for very special cases, notably also for the case where the players' utility functions for each resource are affine (contrasting with the tractable case of linear functions). We derive a 2-approximation algorithm by rounding an optimal solution of a natural LP formulation of the problem. Our rounding procedure is non-trivial because it must respect the dependencies between players that the pairwise externalities induce. This is essentially best possible: we show that the integrality gap of the LP is close to 2.
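A minimal sketch of the objective being optimized, under assumed notation; the base values, externalities, and instance below are hypothetical, and the exhaustive search merely illustrates the objective, not the LP-rounding algorithm.

```python
from itertools import product

def welfare(assignment, base, ext):
    """Total welfare: each player's base value for its chosen resource, plus the
    pairwise externalities with every other player on the same resource."""
    total = 0.0
    for i, e in enumerate(assignment):
        total += base[i][e]
        total += sum(ext[i][j][e] for j, f in enumerate(assignment)
                     if j != i and f == e)
    return total

def brute_force_opt(strategies, base, ext):
    """Exhaustive search over joint strategies (exponential time; the abstract
    shows the problem is NP-hard in general and gives a 2-approximation by
    rounding an LP relaxation)."""
    return max((welfare(a, base, ext), a) for a in product(*strategies))

# Hypothetical instance: 2 players, 2 resources; both players gain an
# externality of 3 from sharing resource 0.
base = [[1, 0], [0, 1]]                 # base[i][e]
ext = [[None, [3, 0]], [[3, 0], None]]  # ext[i][j][e]
strategies = [(0, 1), (0, 1)]
print(brute_force_opt(strategies, base, ext))
```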

Small adaptations of our rounding approach enable us to derive approximation algorithms for several generalizations of the problem. Most notably, we obtain an (r+1)-approximation when every player may express for each resource externalities on player sets of size r. Further, we derive a 2-approximation when the strategy sets of the players are restricted and a (3/2)-approximation when these sets are of size 2.

Joint work with Guido Schaefer.


14:50 - 15:35: Berthold Vöcking

We present a randomized, polynomial-time approximation scheme for multi-unit auctions. Our mechanism is truthful in the universal sense, i.e., it is a distribution over deterministically truthful mechanisms. Previously known approximation schemes were only truthful in expectation, a weaker notion of truthfulness that assumes risk-neutral bidders. The existence of a universally truthful approximation scheme had been questioned by previous work showing that multi-unit auctions with certain technical restrictions on their output do not admit a polynomial-time, universally truthful mechanism with approximation factor better than two.

Our new mechanism employs VCG payments in a non-standard way: the deterministic mechanisms underlying our universally truthful approximation scheme are not maximal in range and do not belong to the class of affine maximizers, which, at first sight, seems to contradict previous characterizations of VCG-based mechanisms. Instead, each of these deterministic mechanisms is composed of a collection of affine maximizers, one for each bidder, which yields a subjective variant of VCG.


15:35 - 16:00: Tobias Harks

Reducing traffic congestion via toll pricing has been a central topic in the operations research and transportation literature and, recently, it has been implemented in several cities all over the world. Since in practice it is not feasible to impose tolls on every edge of a given traffic network, we study the resulting mathematical problem of computing tolls on a predefined subset of edges of the network so as to minimize the total travel time of the induced equilibrium flow. We first present an analytical study for the special case of parallel edge networks highlighting the intrinsic complexity and non-convexity of the resulting optimization problem. We then present algorithms for general networks for which we systematically test the solution quality for large-scale network instances.


16:15 - 16:40: Diederik Roijers

In cooperative multi-agent systems the number of possible joint actions grows exponentially in the number of agents. Consequently, exploiting loose couplings, as expressed in graphical models, is key to computationally efficient decision making in such settings. However, existing methods such as variable elimination for solving such models assume there is only a single objective. In contrast, many real-world problems are characterized by the presence of multiple objectives, for which the solution is not a single joint action but the set of joint actions that are optimal for all trade-offs between the objectives. In this paper, we propose a new method for multi-objective multi-agent graphical games, called multi-objective variable elimination, which replaces the maximization performed by variable elimination with an operator that prunes away dominated solutions. We prove the correctness of the proposed method and analyze its complexity. We also present an empirical study that shows that this method can tackle multi-objective problems much faster than methods that do not exploit loose couplings, and analyzes its scalability with respect to multiple factors such as the number of objectives, the number of agents, the induced width of the graphical model, and the size of the solution set.
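The pruning operator that replaces maximization can be sketched in a few lines; the two-objective payoff vectors below are hypothetical, chosen only to show the operator's effect.

```python
def dominates(u, v):
    """u Pareto-dominates v: weakly better in every objective, strictly in one."""
    return all(a >= b for a, b in zip(u, v)) and any(a > b for a, b in zip(u, v))

def pareto_prune(vectors):
    """The operator that replaces max in multi-objective variable elimination:
    keep only the value vectors not dominated by any other."""
    return [u for u in vectors
            if not any(dominates(v, u) for v in vectors if v != u)]

# Hypothetical two-objective payoff vectors for a set of joint actions.
vals = [(3, 1), (1, 3), (2, 2), (1, 1), (2, 1)]
print(pareto_prune(vals))  # (1, 1) and (2, 1) are dominated and pruned
```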


16:40 - 17:05: Bart Smeulders

We provide results on the computational complexity of goodness-of-fit measures (i.e. Afriat's efficiency index, Varian's efficiency vector-index and the Houtman-Maks index) associated with several revealed preference axioms (i.e. WARP, SARP, GARP and HARP). These results explain the computational difficulties that have been observed in the literature when computing these indices. Our NP-hardness results are obtained by reductions from the independent set problem. We also show that this reduction can be used to prove that no constant-factor approximation algorithm exists for Varian's index, nor for the Houtman-Maks index (unless P = NP). Finally, we give an exact polynomial-time algorithm for finding Afriat's efficiency index.
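As background for the revealed-preference axioms involved, here is a standard textbook GARP-consistency check (Warshall transitive closure over the direct revealed-preference relation); it is not the authors' algorithm, and the price/bundle data are made up.

```python
def garp_consistent(prices, bundles):
    """GARP: if bundle x^i is (indirectly) revealed preferred to x^j, then
    x^i must not be strictly cheaper than x^j at prices p^j."""
    n = len(bundles)
    # cost[i][j] = p^i . x^j
    cost = [[sum(p * x for p, x in zip(prices[i], bundles[j]))
             for j in range(n)] for i in range(n)]
    # Direct revealed preference: i R0 j iff p^i . x^i >= p^i . x^j
    R = [[cost[i][i] >= cost[i][j] for j in range(n)] for i in range(n)]
    # Transitive closure via Warshall's algorithm
    for k in range(n):
        for i in range(n):
            for j in range(n):
                R[i][j] = R[i][j] or (R[i][k] and R[k][j])
    # Violation: i R j while x^i is strictly cheaper than x^j at p^j
    return all(not (R[i][j] and cost[j][j] > cost[j][i])
               for i in range(n) for j in range(n))

# Hypothetical data: a consistent pair and a violating pair of observations.
print(garp_consistent([(1, 2), (2, 1)], [(2, 1), (1, 2)]))
print(garp_consistent([(1, 2), (0.5, 2)], [(2, 0), (0, 1)]))
```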


17:05 - 17:30: Ruben Hoeksma

We consider optimal mechanism design for a sequencing problem with n jobs which require a compensation payment for waiting. The jobs' processing requirements as well as unit costs for waiting are private data. Given public priors for this private data, we seek to find an optimal mechanism, that is, a scheduling rule and incentive compatible payments that minimize the total expected payments to the jobs. Here, incentive compatible refers to a Bayes-Nash equilibrium. While the problem can be efficiently solved when jobs have single-dimensional private data, along the lines of a seminal paper by Myerson, we here address the problem with two-dimensional private data. We show that the problem can be solved in polynomial time by linear programming techniques. Our implementation is randomized and truthful in expectation. The main steps are a compactification of an exponential-size linear program, and a combinatorial algorithm to compute from feasible interim schedules a convex combination of at most n deterministic schedules. In addition, computational experiments with random instances yield some further theoretical insights.