Algorithms and Optimization under Uncertainty

NII Shonan Meeting:

@ Shonan Village Center, May 22-25, 2017


    • Niv Buchbinder, Tel-Aviv University, Israel
    • Nikhil Devanur, Microsoft Research, USA
    • Debmalya Panigrahi, Duke University, USA


Description of the meeting

In the classical offline computational model, an algorithm operates on a fully specified set of input data to produce a desired output. While this model has propelled much of computer science, modern applications typically do not afford the luxury of complete knowledge and certainty about the input data. Thus, the typical real-world algorithm today has to make optimization decisions with incomplete information. This can happen for several reasons:

  • The algorithm has to make decisions before the entire input has been generated. For instance, online advertisements on webpages have to be allocated to advertisers without knowledge of future opportunities. Similarly, a sequence of requests for computational resources in a data center has to be served without knowing future requests.
  • The entire input is available but is prohibitively large for algorithmic efficiency. This is especially true in today’s era of big data. For instance, a real-time search algorithm on the Internet cannot afford to scan the entire web index for every single query, nor can an online social network scan the entire network structure before making every algorithmic decision.
  • Part of the input is unavailable or unreliable because of genuine estimation errors, geographical separation in distributed systems, or strategic misreporting. For example, clients requesting online services and advertisers placing online advertisements may misreport their preferences and requirements if doing so yields a better outcome for them than truthful behavior. In other cases, such as estimating the instantaneous load of a distributed online system, accurate estimates are genuinely difficult to obtain, which has triggered recent interest in models of computation in the dark.

In this workshop, we plan to bring together world-class researchers from multiple domains of algorithm design with the goal of discussing and furthering research in this area of algorithms under uncertainty. The main focus will be on optimization problems that are central in algorithmic research, and our objective will be to promote activity in theoretical research on these problems in limited information settings. Recent research in this area has had a tremendous practical impact in the area of online advertising, and the workshop aims to foster development of the theory to have a similar impact in other areas such as cloud computing.

Online algorithms and Competitive analysis. The field of online computation, in which the input is presented to the algorithm piece by piece, has played an important role in the design of data structures, scheduling, memory management, and other fundamental areas of computer science [2]. Recently, there has been renewed interest in this area due to new applications that have emerged in online systems, especially Internet advertising. For example, the design of data centers for large-scale computing poses new challenges, motivated by aspects such as fair and efficient online scheduling of heterogeneous clients in environments such as Hadoop and MapReduce. Important problems in this area include mapping virtual machines onto physical machines (modeled as online bin packing), efficient (oblivious/online) routing of traffic between racks of machines within a data center, online pricing of cloud computing services, and so on. An additional resource that these online schedulers must now optimize for is energy usage, with energy costs poised to become the dominant expenditure in the operation of data centers.
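For concreteness, the online bin-packing view of virtual-machine placement can be sketched with the classical First Fit rule: each arriving demand is placed in the first open machine with enough residual capacity, and decisions are irrevocable. The sketch below is purely illustrative (the function name and the normalized-capacity encoding are our own, not from the workshop materials).

```python
def first_fit(items, capacity=1.0):
    """Online First Fit for bin packing: place each arriving item (e.g., a
    VM's resource demand, normalized to machine capacity) into the first
    open bin that can still hold it, opening a new bin if none fits.
    Returns the bin index chosen for each item and the number of bins used."""
    bins = []        # residual capacity of each open bin
    assignment = []  # bin index chosen for each item, in arrival order
    for size in items:
        for i, residual in enumerate(bins):
            if size <= residual:
                bins[i] -= size
                assignment.append(i)
                break
        else:
            # no open bin fits: open a new one (an irrevocable decision)
            bins.append(capacity - size)
            assignment.append(len(bins) - 1)
    return assignment, len(bins)

# Demands arrive one by one; First Fit packs these four into 2 bins.
assignment, used = first_fit([0.5, 0.7, 0.5, 0.3])
```

The rule never revisits earlier choices, which is exactly what makes the online model hard: an adversarial arrival order can force it to use constant-factor more bins than the offline optimum.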

In addition to these myriad applications, recent research interest in the online algorithms community has also been piqued by the development of fundamentally new ideas and techniques – in particular, the import of the linear programming toolkit applicable to a broad class of online problems [3]. A combination of powerful ideas in primal dual algorithms, online learning, and LP rounding has led to progress in longstanding questions such as the k-server problem [1]. This makes it an opportune time to bring together researchers in the theory and practice of online algorithms with the twin goals of making fundamental progress on longstanding questions and developing a principled approach for applied problems of practical importance.
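The primal-dual framework surveyed in [3] is usually introduced through the ski-rental problem, where the deterministic break-even rule is 2-competitive. The sketch below is an illustration under our own naming and unit rental cost, not code from the survey: rent until the cumulative rent would match the purchase price, then buy.

```python
def ski_rental_cost(days_skied, buy_cost, rent_cost=1):
    """Break-even rule for ski rental: rent while total rent paid stays
    below the purchase price, then buy. The online cost is at most twice
    the offline optimum, which knows days_skied in advance."""
    cost = 0
    paid_rent = 0
    for _ in range(days_skied):
        if paid_rent + rent_cost >= buy_cost:
            return cost + buy_cost  # buying now beats renting forever
        cost += rent_cost
        paid_rent += rent_cost
    return cost  # season ended before the break-even point

# Long season: rent 4 days, then buy (cost 9 vs. offline optimum 5).
# Short season: just rent (cost 3, which is offline-optimal).
```

The same rent-or-buy tension, generalized via linear-programming duality, underlies the competitive algorithms for covering/packing problems discussed in [3].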

Beyond worst-case analysis and connections to machine learning. While research in online algorithms and competitive analysis has yielded rich dividends in both theory and practice, there is a growing belief among researchers that we also need to explore possibilities beyond worst-case (competitive) analysis. For example, in online advertising, while future demand is not known exactly in advance, reasonably sound predictions based on historically observed data are commonly used in practice. This has led to a growing body of work in these domains that focuses on average-case analysis, encompassing various stochastic models, including i.i.d. models with (known/unknown) input distributions and random-permutation models. Algorithms in these average-case settings achieve provably better performance than is possible in the corresponding worst-case models, and they have been successfully deployed in real-world online systems.
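A textbook illustration of the gains available in the random-permutation model is the classical secretary rule: observe the first n/e arrivals without committing, then accept the first arrival that beats all of them. This selects the maximum with probability approaching 1/e, a guarantee unattainable under adversarial arrival order. The function name and interface below are our own illustrative choices.

```python
import math
import random

def secretary(values):
    """Classical secretary rule for the random-permutation model:
    skip the first n/e candidates, recording the best seen, then accept
    the first later candidate who beats that benchmark."""
    n = len(values)
    cutoff = int(n / math.e)
    threshold = max(values[:cutoff], default=float("-inf"))
    for v in values[cutoff:]:
        if v > threshold:
            return v
    return values[-1]  # no one beat the benchmark; forced to take the last

# Over random arrival orders, the best of 100 candidates is picked
# roughly a 1/e fraction of the time.
```

The analysis relies entirely on the arrival order being uniformly random, which is precisely the modeling assumption that separates these average-case results from worst-case competitive analysis.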

Recent research has suggested that such gains can be extended to other domains as well, most notably the broad class of optimization problems that can be encoded as packing/covering linear programs; e.g., this includes the scheduling problems that we touched upon earlier. From a technical perspective, this research is bringing together two powerful toolkits in theoretical computer science – online learning and combinatorial optimization. At the heart of online learning is the bandits framework, which captures the “explore-exploit tradeoff” that characterizes optimization with incomplete information. While optimization under uncertainty and online learning have similar goals at a high level, there are crucial differences in the techniques commonly used, and the transfer of technical tools between the two areas is a recent and nascent phenomenon. In this workshop, we plan to bring together experts in these two areas to exchange ideas and problems, and we hope to trigger collaborations between the two groups of researchers. For example, we want to encourage the following question: can the no-regret bounds that are ubiquitous in online learning also be obtained for classical online problems? We believe that this is a promising direction for overcoming the limitations of worst-case competitive analysis, one of whose key definitional obstacles has been its performance metric, which compares against an offline optimum. We believe that ideas from queuing theory and other related domains could also be useful in this regard, and we plan to bring together researchers from these areas to enable cross-fertilization of ideas.
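The no-regret guarantee mentioned above can be made concrete with the multiplicative-weights (Hedge) algorithm, a staple of online learning: against any loss sequence, its cumulative loss exceeds that of the best fixed expert by only O(√(T log n)). The sketch below is our own minimal encoding (function name and loss-matrix layout are illustrative assumptions).

```python
import math

def hedge(loss_matrix, eta=0.1):
    """Multiplicative-weights (Hedge) algorithm, a canonical no-regret
    method. loss_matrix[t][i] is the loss of expert i in round t (in [0,1]).
    Returns the algorithm's total expected loss and the best expert's loss;
    their difference (the regret) grows only as O(sqrt(T log n))."""
    n = len(loss_matrix[0])
    weights = [1.0] * n
    alg_loss = 0.0
    for losses in loss_matrix:
        total = sum(weights)
        probs = [w / total for w in weights]
        # expected loss of the randomized algorithm this round
        alg_loss += sum(p * l for p, l in zip(probs, losses))
        # exponentially down-weight experts that incurred loss
        weights = [w * math.exp(-eta * l) for w, l in zip(weights, losses)]
    best_expert_loss = min(
        sum(row[i] for row in loss_matrix) for i in range(n)
    )
    return alg_loss, best_expert_loss
```

Note the contrast with competitive analysis: the benchmark here is the best fixed expert in hindsight rather than an unconstrained offline optimum, which is exactly why regret bounds can sidestep the pessimism of worst-case competitive ratios.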


  1. Nikhil Bansal, Niv Buchbinder, Aleksander Madry, and Joseph Naor. A polylogarithmic-competitive algorithm for the k-server problem. Journal of the ACM, 62(5), Article 40, 2015.
  2. Allan Borodin and Ran El-Yaniv. Online computation and competitive analysis. Cambridge University Press, 1998.
  3. Niv Buchbinder and Joseph Naor. The design of competitive online algorithms via a primal-dual approach. Foundations and Trends in Theoretical Computer Science, 3(2-3):93–263, 2009.
