Researchers in DMMP work in several research areas with applications in fields such as health care, traffic, energy, ICT, games and auctions, logistics, and timetabling. The overview of previously completed theses gives an indication of the kinds of topics that are possible for a final assignment. For internships and final assignments we collaborate with various external partners outside the UT, among them DAT.mobility, ORTEC, Thales, NS, and CQM; foreign universities are also an option. The list below is therefore indicative and shows a few of the open problems to work on. If you are interested in assignments for an internship or master's thesis, please contact any member of the group.

List of ongoing and completed master's theses.

The following list of potential MSc topics is **always under construction** and will be updated regularly. To get more information, you can always contact any of the staff.

#### MScThesis

Mathematical optimization plays a large role in the coordination of smart residential energy systems. In these systems, on the one hand, there is an increased infeed of renewable energy (solar, wind). On the other hand, the rising number of electric vehicles, heat pumps, and other “smart devices” leads to an increase of the residential energy demand. This increase is so extreme that it can cause blackouts if not managed properly.

In order to reduce the risk of blackouts, we can reduce peak consumption or production by exploiting the flexibility of smart devices. This is where mathematical optimization comes into play: one can formulate the problem of peak minimization as a mathematical program that can be solved using either standard solvers or, preferably, efficient tailored algorithms.
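As a toy illustration of such a tailored algorithm (the single-device setting and all numbers here are made up for this sketch, not part of our models), the peak of a household load profile with one flexible device can be minimized by bisection on the peak level combined with greedy valley filling:

```python
# Toy peak minimization: schedule one flexible device (say, an EV that
# must receive `energy` kWh) over T time slots with a fixed base load,
# minimizing the maximum total load. Feasibility of a peak level is
# checked by greedily charging up to that level in every slot; the
# optimal level is then found by bisection.

def min_peak_schedule(base, energy, cap, tol=1e-6):
    """Return (peak, schedule) minimizing max(base[t] + x[t])
    subject to 0 <= x[t] <= cap and sum(x) == energy."""
    def charge_at(level):
        return [min(cap, max(0.0, level - b)) for b in base]

    lo, hi = max(base), max(base) + cap
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if sum(charge_at(mid)) >= energy:   # enough room below `mid`?
            hi = mid
        else:
            lo = mid
    x = charge_at(hi)
    # remove the tiny surplus so the schedule delivers exactly `energy`
    surplus = sum(x) - energy
    for t in range(len(x)):
        take = min(x[t], surplus)
        x[t] -= take
        surplus -= take
    return max(b + xt for b, xt in zip(base, x)), x

# four slots with base load in kW; the valleys get filled up to level 3
peak, sched = min_peak_schedule(base=[2.0, 1.0, 2.5, 1.5],
                                energy=5.0, cap=3.0)
```

The bisection runs in O(T log(cap/tol)) time, which is the kind of efficiency that gets lost once the convexity assumptions discussed below are dropped.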

Within this broad topic, there are several possibilities for MSc theses. Following are three examples of research projects:

- In our optimization models, we make many assumptions on the properties of devices in order to obtain efficient (i.e., polynomial-time solvable or convex) models. Examples are continuous charging behavior, minimal losses, and perfect state-of-charge measurements. To what extent can these assumptions be relaxed while maintaining efficiency?
- The (electricity) distribution grid has a hierarchical network structure with different levels. At the bottom of the hierarchy are consumption devices and local energy production devices (e.g., solar panels), whereas higher in the hierarchy are larger grid assets such as transformers. Due to this level structure, peak reduction on all levels is required in order to maintain a proper grid operation. When only two levels are considered, this can be done relatively efficiently. However, as soon as more levels are included, the efficiency decreases drastically due to the increased dimensionality. How can we increase the scalability of peak reduction strategies applied to multiple network levels?
- One approach to tackle peak consumption or production is by introducing a clever market design. This market structure should be such that everyone is incentivized to participate in reducing the peaks. Existing literature makes extensive use of tools from the area of (Algorithmic) Game Theory. Up to now these approaches differ in many aspects, such as the variety of smart device types, their stability, and their running time. Is there a way to combine the advantages of these approaches or generalize the setting even further (for instance, include uncertainty)?

For more information, please contact Johann Hurink.

#### MScThesis

The HyperBall algorithm, based on probabilistic counters, has turned out to be extremely effective and efficient for computing graph distances in gigantic networks, such as the Facebook network. In this project we will investigate what other important characteristics of networks can be estimated using this powerful method.

In particular, we focus on the important problem of minimizing conductance: a measure for how well-connected a group of vertices is to the rest of the network. Conductance has many important applications: it gives information on how fast a rumor spreads through a network and it can help in finding ‘communities’, sets of densely connected vertices within a graph. These communities usually have low conductance, and finding good communities then translates to finding sets of low conductance.

However, finding a set of minimum conductance in a graph is computationally intractable. We will investigate whether we can efficiently find good candidates for sets of minimum conductance using HyperBall-type algorithms.
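To make the objective concrete, here is a direct, exact computation of conductance on a toy graph (the thesis would instead aim to *approximate* such quantities with probabilistic counters on graphs far too large for exact computation):

```python
# Exact conductance of a vertex set S in an undirected graph:
#   conductance(S) = cut(S, V\S) / min(vol(S), vol(V\S)),
# where cut counts edges crossing the boundary of S and vol(X) is the
# total degree of the vertices in X.

def conductance(edges, S, vertices):
    S = set(S)
    deg = {v: 0 for v in vertices}
    cut = 0
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
        if (u in S) != (v in S):   # edge crosses the boundary of S
            cut += 1
    vol_S = sum(deg[v] for v in S)
    vol_rest = sum(deg[v] for v in vertices if v not in S)
    return cut / min(vol_S, vol_rest)

# two triangles joined by a single bridge edge: the natural community
# {0, 1, 2} has low conductance (cut = 1, vol = 7)
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
phi = conductance(edges, {0, 1, 2}, range(6))
```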

For more information, please contact Nelly Litvak or Clara Stegehuis.

#### MScThesis

Future mobile networks, 5G and beyond, will make use of proactive caching in the base stations. This means that, based on predicted user behavior, content is placed in base stations before it is actually requested. The advantage is that delivery latencies are reduced: once a request for content arrives, the content does not need to be fetched from the server. Moreover, it enables load balancing in the backhaul of the network, since the content of the caches can be updated whenever the network load is low. Since storage capacity in a base station is limited, we cannot cache all content. At the same time, since most locations are covered by multiple base stations, we want to prevent overlap in the cached content between those base stations. The mathematical challenge that arises is to decide which content to cache in which base station.

The resulting optimization problem is not convex and cannot be tackled straightforwardly. In this project you will work on a distributed optimization algorithm that can be interpreted as a potential game played between the base stations in the network. A basic version of such an algorithm has been proposed in the literature, but the underlying theory needs to be further developed. In this project you will develop this theory, with the purpose of providing performance guarantees on the algorithm (convergence rate, quality of solution,…) as well as refinements/improvements on the algorithm itself.

You will be working with algorithmic game theory and in particular with the class of (matroid) congestion games. In such a game we assume that the players behave selfishly and are interested only in optimizing their personal utility. Game theory predicts that such behavior leads to an equilibrium solution, a concept that has been studied extensively. Interestingly, the distributed algorithm outlined above can be described as a game such that the solutions of the algorithm correspond to equilibria of the game. You will apply and refine game-theoretic methods to analyze the quality of equilibria and the convergence rate.
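As a minimal sketch of the kind of dynamics involved (a toy singleton congestion game with identical linear costs, not the caching model itself), best-response updates terminate in a pure Nash equilibrium because such games admit a potential function:

```python
# Best-response dynamics in a tiny singleton congestion game: each
# player picks one resource and pays the load on that resource. Since
# this is a potential game, repeatedly letting players switch to a
# cheaper resource terminates in a pure Nash equilibrium.

def best_response_dynamics(n_players, n_resources):
    choice = [0] * n_players          # everyone starts on resource 0
    improved = True
    while improved:                   # terminates: potential decreases
        improved = False
        for i in range(n_players):
            load = [choice.count(r) for r in range(n_resources)]
            current = load[choice[i]]
            # cost of resource r after a switch is load[r] + 1
            best = min(range(n_resources),
                       key=lambda r: load[r] + (0 if r == choice[i] else 1))
            if load[best] + (0 if best == choice[i] else 1) < current:
                choice[i] = best
                improved = True
    return choice

eq = best_response_dynamics(n_players=4, n_resources=2)
# at equilibrium the loads are balanced across the two resources
```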

Knowledge of 5G and/or caching is not required before starting this project.

Please contact Alexander Skopalik or Jasper Goseling.

#### Internship & MSc thesis

In a hot strip mill, thick steel slabs are hot rolled into long strips with a thickness of 2 to 25 mm. After hot rolling, the strip needs to be cooled down on the runout table (ROT) from about 900°C to about 500°C or lower, after which the strip is coiled. The new market developments in hot rolled products are mainly in advanced high-strength steels. These products require precise and highly flexible control of the cooling path on the runout table. Not only the final temperature (coiling temperature), but also the cooling rates and intermediate temperatures are important for achieving the right mechanical properties of the steel.

Before a steel strip enters the ROT, there is limited time available for the controller to determine the optimal setup. As there are many variables involved (the settings of each individual bank, material properties, velocity), some of which are discrete (e.g., the valve settings: 0%, 70%, or 100% open), it is very complex to find the minimum of the objective function within the limited available time: we have about 6 seconds to find the optimum among the possible control settings. Various algorithms are available; however, many of them are not suitable for finding the global minimum (they might return a local minimum) and/or are not fast enough to be useful. A suitable method must be able to solve a problem that is non-convex (having both local minima and a global minimum), non-linear, and discrete.
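To get a feel for the combinatorial side of the problem, consider a toy version with a made-up linear cooling model (the real ROT physics, the target, and the per-bank cooling effect here are all illustrative, not Tata Steel's model). Brute force over the discrete valve settings works for a handful of banks, but the search space grows as 3^N, which is exactly why cleverer algorithms are needed within a 6-second budget:

```python
# Toy illustration of the discrete search space: each cooling bank has a
# valve setting in {0%, 70%, 100%}, and we look for the setting vector
# whose crudely simulated coil temperature is closest to target.
from itertools import product

SETTINGS = (0.0, 0.7, 1.0)   # valve: closed, 70 % open, fully open

def final_temperature(valves, t_in=900.0, cooling_per_bank=70.0):
    # each open valve removes heat proportionally to its setting
    # (a made-up linear model; the real one is highly non-linear)
    return t_in - cooling_per_bank * sum(valves)

def brute_force(n_banks, target=500.0):
    best = min(product(SETTINGS, repeat=n_banks),   # 3**n_banks vectors
               key=lambda v: abs(final_temperature(v) - target))
    return best, final_temperature(best)

valves, temp = brute_force(n_banks=6)   # 3**6 = 729 candidates
```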

This is a project with R&D at Tata Steel (The Netherlands). For more details and a more detailed problem description, please contact Johann Hurink or Marc Uetz.

#### MSc thesis

Congestion games are a fundamental model in optimization and game theory, with applications e.g. in traffic routing. The price of stability is a game-theoretic concept that relates the quality of the best Nash equilibrium to that of an optimal solution. It is the "smaller brother" of the well-known price of anarchy as defined by Koutsoupias and Papadimitriou in 2001, and was first defined by Anshelevich et al. in 2004. The basic question asked here is if and how the combinatorial structure of the strategy spaces of the players influences the quality of the possible equilibria. In that respect, progress was recently made for uniform matroids and the price of anarchy, which equals approximately 1.35188. The conjecture is that the price of stability for that model (and maybe even for more general models) equals 4/3. The proof of this conjecture is the topic of this project. Background literature is the paper by de Jong, Klimm and Uetz on "Efficiency of Equilibria of Uniform Matroid Congestion Games" as well as the more recent paper "The asymptotic price of anarchy for k-uniform congestion games" by de Jong, Kern, Steenhuisen and Uetz. Both papers are available upon request.
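In tiny games, both measures can be computed by enumeration. The sketch below uses a toy fair cost-sharing game in the spirit of Anshelevich et al. (not the uniform-matroid setting of this project; the cost numbers are illustrative): two players each choose a private edge or a shared edge whose cost is split among its users.

```python
# Enumerate all pure profiles of a 2-player cost-sharing game, find the
# pure Nash equilibria, and compute price of stability / anarchy.
from itertools import product

PRIVATE = {1: 1.0, 2: 0.5}   # cost of each player's private edge
SHARED = 1.2                  # shared edge, cost split among users

def costs(profile):
    users = profile.count('S')
    return tuple(SHARED / users if s == 'S' else PRIVATE[i + 1]
                 for i, s in enumerate(profile))

def is_nash(profile):
    for i in range(2):
        for alt in 'PS':                       # unilateral deviations
            dev = list(profile)
            dev[i] = alt
            if costs(tuple(dev))[i] < costs(profile)[i]:
                return False
    return True

profiles = list(product('PS', repeat=2))
nash = [p for p in profiles if is_nash(p)]
opt = min(sum(costs(p)) for p in profiles)
pos = min(sum(costs(p)) for p in nash) / opt   # price of stability
poa = max(sum(costs(p)) for p in nash) / opt   # price of anarchy
```

Here the only pure Nash equilibrium is both players on their private edges (social cost 1.5), while the optimum puts both on the shared edge (cost 1.2), so both ratios equal 1.25.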

For further questions, contact Jasper de Jong or Marc Uetz.

#### Master's thesis

In a recent paper (de Jong and Uetz, https://arxiv.org/abs/1709.10289) we have analyzed the quality of several types of equilibria for so-called set packing and throughput scheduling games. In that model, players sequentially select items to maximize the total value of the selected items, yet each player is restricted in the feasible subsets she can choose. The results are bounds on the quality of Nash and other game-theoretic equilibria.

One of the distinguishing features of that model is that no item can be chosen by more than one player. That is a natural assumption in sequential games, but appears somewhat artificial when considering single-shot games.

The question to be analyzed in this MSc project is what happens when that assumption is relaxed. First, what type of model adequately captures the situation in which several players choose one and the same item? What are the consequences for the resulting equilibria? And what is the price of anarchy for pure and mixed Nash equilibria in such a model?

For more information, please contact Marc Uetz.

#### MSc thesis

In a series of recent publications, several researchers have analyzed sequential games and subgame perfect equilibria in order to circumvent the sometimes bad quality of Nash equilibria. Specifically, de Jong and Uetz (2015) have done this for congestion games with two or three players, showing that the sequential price of anarchy equals 1.5 and 1039/488, respectively. Subsequently, Correa, de Jong, de Keijzer and Uetz (2016) considered network routing games and showed that, surprisingly, the sequential price of anarchy for games with n players can even be unbounded (while the price of anarchy is only 2.5). All these results are for pure-strategy Nash and subgame perfect equilibria. One of the open questions is what happens if we consider mixed strategies, or settings in which the demand of a player is splittable. As a starting point, one can consider games with two or three players. The underlying research papers are available upon request.

For further information, contact Jasper de Jong or Marc Uetz.

#### MSc thesis

For many optimization problems, finding optimal solutions is prohibitive because the problems are NP-hard. This often holds even in the natural case where the instances of the optimization problem consist of points in the Euclidean plane. In order to still be able to solve these problems, heuristics have been developed that find close-to-optimal solutions in reasonable time. While many such heuristics show remarkable performance in practice, their theoretical performance guarantees are poor and fail to explain practical observations.

Smoothed analysis is a relatively new paradigm in the analysis of algorithms that aims at explaining the performance of such heuristics for which there is a gap between theoretical analysis and practical observation.

Recently, Bläser et al. (Smoothed Analysis of Partitioning Algorithms for Euclidean Functionals, Algorithmica, to appear) have developed a framework to analyze so-called partitioning heuristics for optimization problems embedded in the Euclidean plane. The goal of this thesis is to generalize this framework to higher dimensions and to apply it to analyze further heuristics for Euclidean optimization problems.
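A bare-bones example of the kind of partitioning heuristic the framework covers (a made-up grid heuristic for Euclidean TSP, purely illustrative and not the algorithms from the paper): split the unit square into a k × k grid, order the cells snake-wise, and concatenate the points cell by cell into a tour.

```python
# Grid partitioning heuristic for Euclidean TSP in the unit square:
# assign each point to a grid cell, traverse the cells boustrophedon,
# and visit the points of each cell consecutively.
from math import dist, floor

def grid_tour(points, k):
    cells = {}
    for p in points:
        cx = min(k - 1, floor(p[0] * k))
        cy = min(k - 1, floor(p[1] * k))
        cells.setdefault((cx, cy), []).append(p)
    tour = []
    for cy in range(k):
        # snake through the row of cells, alternating direction
        xs = range(k) if cy % 2 == 0 else reversed(range(k))
        for cx in xs:
            tour.extend(sorted(cells.get((cx, cy), [])))
    return tour

def tour_length(tour):
    return sum(dist(tour[i], tour[(i + 1) % len(tour)])
               for i in range(len(tour)))

pts = [(0.1, 0.1), (0.9, 0.1), (0.5, 0.5), (0.1, 0.9), (0.9, 0.9)]
tour = grid_tour(pts, k=2)
```

Smoothed analysis asks how the gap between such a heuristic tour and the optimum behaves when the input points are slightly perturbed at random.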

For more information, please contact Bodo Manthey.

#### MSc thesis

In Game Theory we learn that the existence of Nash equilibria (Nash's Theorem) follows from Brouwer's Fixed Point Theorem. What about the converse? The answer depends on which version of Nash's Theorem we take, but the details still need to be clarified. In addition: a literature study and overview of what is known with respect to the complexity of the two problems (computing fixed points vs. computing Nash equilibria).
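For reference, the classical direction (Brouwer implies Nash) uses Nash's continuous self-map on the product of the players' mixed-strategy simplices; its fixed points are exactly the Nash equilibria:

```latex
% x is a mixed-strategy profile, u_i(a, x_{-i}) the payoff of player i
% when playing pure strategy a against x_{-i}; fixed points of g are
% exactly the Nash equilibria of the game.
\[
  g_{i,a}(x) \;=\;
  \frac{x_{i,a} + \max\{0,\; u_i(a, x_{-i}) - u_i(x)\}}
       {1 + \sum_{b} \max\{0,\; u_i(b, x_{-i}) - u_i(x)\}}
\]
```

The project asks, among other things, whether an arbitrary Brouwer instance can conversely be encoded as a Nash instance, and at what computational cost.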

Please contact Walter Kern.

#### MScThesis

Let *N* be a set of *n* = |*N*| players. A *simple* game is defined by a partition of the power set of *N* into two non-empty sets *L* and *W*, where *W* is the set of *winning coalitions* and *L* the set of *losing coalitions*, such that *L* is closed under taking subsets and hence *W* is closed under taking supersets. Most well-known are *weighted voting games*, where each player *i* in *N* has an associated nonnegative *weight* *p*(*i*), and the winning coalitions *W* are defined by *p*(*W*) being at least equal to 1. To check whether a simple game is a weighted voting game, one is led to consider the min-max problem of finding weights *p* with *p*(*W*) ≥ 1 for all winning coalitions *W* that minimize the maximum weight *p*(*L*) over all losing coalitions *L*.

For weighted voting games, this min-max value is less than 1. In general, the value may be seen as measuring the distance of a simple game from the weighted voting games. For general simple games, a conjecture states that the optimum is at most *n*/4.

Assignment: Study particular classes of simple games or try to prove (approximately) the conjecture. Please contact Walter Kern.

#### MScThesis

The traveling tournament problem is a sports timetabling problem that abstracts two issues in creating timetables: the home/away patterns of teams and their traveling. The only compact integer-programming model used in the literature requires *O(n^4)* variables and constraints, where *n* is the number of teams. Moreover, its LP relaxation is very weak.

You will implement the compact model and another one of size *O(n^3)* in order to compare their strengths and the resulting solver performance. Moreover, you will investigate computationally whether the weakness of the LP relaxation comes from the scheduling part, the routing part, or the combination of both. A final objective is to strengthen the model(s).

Knowledge of Python is required.

For more information, please contact Matthias Walter.

#### MScThesis

Strong branching is a technique used in mixed integer programming (MIP) solvers. A few iterations of the Simplex method are performed in order to obtain good estimates on the effect of branching on a specific variable.

Another useful technique is the generation of cutting planes, especially for the initial LP relaxation. Some of these cutting planes are obtained by hypothetically branching on a variable. Examples are Chvátal-Gomory cuts, MIR cuts and Lift-and-Project cuts.

The goal of this thesis is to understand how the outcome of strong branching is influenced by cutting planes.
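A minimal sketch of the scoring idea on a problem whose LP relaxation can be solved greedily (a toy 0/1 knapsack, not how SCIP implements it): each variable is tentatively branched both ways, the relaxations of the two branches are re-solved, and the bound degradations are combined with the common product rule.

```python
# Strong branching on a toy 0/1 knapsack. The LP relaxation of a
# knapsack is solved greedily by value/weight ratio; real solvers
# instead run a few Simplex iterations per candidate, and score only
# the fractional variables (here we score all of them for simplicity).

def knapsack_lp(values, weights, cap, fixed):
    """Greedy fractional knapsack with some variables fixed to 0 or 1.
    Returns (LP bound, index of the fractional variable or None)."""
    bound, room = 0.0, cap
    for i in fixed:
        if fixed[i] == 1:
            room -= weights[i]
            bound += values[i]
    if room < 0:
        return float('-inf'), None            # infeasible branch
    order = sorted((i for i in range(len(values)) if i not in fixed),
                   key=lambda i: values[i] / weights[i], reverse=True)
    for i in order:
        if weights[i] <= room:
            room -= weights[i]
            bound += values[i]
        else:                                  # item fits fractionally
            return bound + values[i] * room / weights[i], i
    return bound, None

def strong_branch(values, weights, cap):
    root, frac = knapsack_lp(values, weights, cap, {})
    scores = {}
    for i in range(len(values)):
        down, _ = knapsack_lp(values, weights, cap, {i: 0})
        up, _ = knapsack_lp(values, weights, cap, {i: 1})
        # product rule on the bound decreases (maximization problem)
        scores[i] = max(root - down, 1e-6) * max(root - up, 1e-6)
    return max(scores, key=scores.get), root

var, root_bound = strong_branch(values=[60, 100, 120],
                                weights=[10, 20, 30], cap=50)
```

In this instance the root LP bound is 240 with variable 2 fractional, and strong branching indeed selects variable 2 because fixing it in either direction degrades the bound the most. The thesis question is how such scores change once cutting planes tighten the relaxation first.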

Knowledge of C/C++ is required in order to be able to work with the MIP solver framework SCIP.

For more information, please contact Matthias Walter.

#### MScThesis

Strong branching is a technique used in mixed integer programming (MIP) solvers. A few iterations of the Simplex method are performed in order to obtain good estimates on the effect of branching on a specific variable.

A new idea, called reverse strong branching, is quite different but aims to achieve the same goals. The goal of this thesis is to complete the idea, implement it within a state-of-the-art solver, and evaluate the performance and quality of the approach.

Knowledge of C/C++ is required in order to be able to work with the MIP solver framework SCIP.

For more information, please contact Matthias Walter.