Seminars

NO.184 New Frontiers in Locality in Computation

Shonan Village Center

March 20 - 23, 2023 (Check-in: March 19, 2023)

Organizers

  • Guy Even
    • Tel-Aviv University, Israel
  • Pierre Fraigniaud
    • CNRS / IRIF, France
  • Gregory Schwartzman
    • JAIST, Japan

Overview

*Important: the venue has been changed from the original site, Shonan Village Center, to Shonan OVA.

Locality is a recurring theme in computation. In fact, computation, as defined by Turing, is a sequence of steps, each of which is determined by a constant amount of information. The notion of computing based on partial information is common to many fields, such as property testing, distributed computation, data structures, sublinear algorithms, and streaming algorithms. The goal of this workshop is to explore commonalities in these fields, to improve our understanding of the possibilities and limitations of locality in computation, to study reductions and relations between local models of computation, and to assess which practical aspects are captured by various models of local computation.

Locality, as a computational paradigm, has been considered from different perspectives, resulting in a variety of computational models. We briefly review a few such models in an informal manner:

  1. In property testing, we are given access to an object via probes, and the goal is to decide whether the object satisfies a specific property or is far from satisfying it, while making as few probes as possible. For example, the object could be a function f : {1, ..., n} → {1, ..., n}, and we wish to test whether the function is monotone. Locality is captured in property testing by the fact that the decision is based on very little information. Indeed, the input is accessed via probes that each reveal only a very small amount of information (e.g., a probe reveals f(x) on a requested argument x). The challenge in property testing is to understand the effect of the object's size and the precision of the test on the number of probes required to test a given property. This challenge is addressed via algorithms and lower bounds. An algorithm provides an upper bound on the number of probes, and a lower bound tells us how many probes must be made by any algorithm. (A minimal sketch of such a tester appears after this list.)
  2. In distributed computing, there is a network in which every node has a local input. The nodes communicate via the network links in synchronous rounds, and the goal is to compute local outputs that satisfy some common (global) objective. For example, the goal could be to compute a minimum spanning tree of the network, so that, in the end, each node knows which of its incident links belong to the minimum spanning tree. The goal in distributed algorithms is to design algorithms that minimize the number of communication rounds.
    Locality is captured in distributed computing by the fact that nodes communicate only with their neighbors. The challenge in distributed computing is to understand the effect of the network's size and topology on the number of communication rounds required to complete the computation. This challenge is addressed via algorithms and lower bounds. An algorithm provides an upper bound on the number of rounds, and a lower bound tells us how many rounds must be used by any algorithm. (A toy simulation of synchronous rounds appears after this list.)
  3. In data structures, we are interested in finding ways to organize data so that operations over this data are fast. For example, one may want to maintain a set of elements subject to operations such as insertions, deletions, and various types of queries. The goal in data structures is twofold: (i) find data structures that require as little space as possible, and (ii) support operations that are as fast as possible. Locality is captured in data structures by the fact that memory is accessed in blocks. Hence, each read or write to memory does not access a single bit or word but rather a whole block of words. Accessed blocks are stored in a special fast memory called a cache. Accessing a word in the cache is much faster than accessing a word whose block is not in the cache. In fact, access times to slow memory usually dominate the computation time. Hence, it is desirable to pack information so that operations access as few blocks as possible. Indeed, classical data structures such as linked lists and search trees that employ pointers are not efficient with respect to this definition of locality. The challenge in data structures is to understand the tradeoffs between the space requirements and the time it takes to process various operations. This challenge is addressed via algorithms and lower bounds. An algorithm provides a data structure with upper bounds on the running times of operations, while a lower bound sets minimal limits on the space of a data structure or the time required to perform operations. (A toy illustration of block accesses appears after this list.)
  4. In sublinear algorithms, we are interested in developing algorithms that perform approximate optimization while reading only a minuscule fraction of the input. The interest in sublinear algorithms is motivated by the fact that modern data sets are too big to be read in their entirety within a reasonable amount of time. For example, consider the task of studying common features in the genomes of different species. With such huge data sets, even linear-time algorithms may be too slow, hence the need for sublinear algorithms. Locality is captured in sublinear algorithms by the fact that the decision is based on only a minuscule part of the input. The challenge in sublinear algorithms is to minimize the running time, and in particular the fraction of the input that needs to be read. This challenge is addressed via algorithms and lower bounds. An algorithm provides an upper bound on the fraction of the input that needs to be read (as well as on the running time), and a lower bound tells us how much of the input must be read by any algorithm. (A sampling-based sketch appears after this list.)
  5. In streaming algorithms, we are allowed to read the input once with a bounded-space algorithm. In particular, we cannot store the whole input in memory. Streaming algorithms occur naturally in communication networks, where a great deal of information traverses a switch. Such a switch has limited memory resources, operates under strict performance requirements, and must nevertheless monitor the traffic to detect events such as cyber attacks. Locality is captured in streaming algorithms by the fact that the algorithm has very limited space. Hence, the algorithm maintains a reduced or compressed view of the input it has read so far. The challenge is to design algorithms that can deduce interesting properties of the input in spite of the space limitations. This challenge is addressed via algorithms and lower bounds. An algorithm provides an upper bound on the required space, while a lower bound tells us how much space must be used by any algorithm. (A one-pass heavy-hitters sketch appears after this list.)
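
To make the probe model of item 1 concrete, here is a minimal sketch of a monotonicity tester in Python. It is an illustration only, not part of the seminar material: the probe oracle f, the distance parameter eps, and the trial count are our own illustrative choices, following the classic binary-search "spot-checker" idea.

```python
import random

def monotonicity_tester(f, n, eps=0.1):
    """One-sided tester: accepts every monotone f, and rejects, with
    probability >= 2/3, any f that is eps-far from monotone (i.e.,
    more than eps*n values must change to make f sorted).
    Makes O(log(n)/eps) probes in total."""
    key = lambda j: (f(j), j)          # tie-break equal values by index
    for _ in range(int(2 / eps) + 1):
        i = random.randrange(n)
        target, lo, hi = key(i), 0, n  # binary search for key(i)
        while lo < hi:
            mid = (lo + hi) // 2
            if key(mid) == target:
                break                  # landed back on index i: consistent
            elif key(mid) < target:
                lo = mid + 1
            else:
                hi = mid
        else:
            return False               # search missed i: violation found
    return True

print(monotonicity_tester(lambda j: 2 * j, 1000))  # monotone: True
print(monotonicity_tester(lambda j: -j, 1000))     # far from monotone: False (w.h.p.)
```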
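
For item 2, the following toy simulation of the synchronous message-passing model shows how locality arises from round-based communication: after r rounds, a node's state can depend only on its r-hop neighborhood. The graph, the inputs, and the "minimum input within two hops" task are illustrative assumptions.

```python
def run_local_model(adj, inputs, rounds):
    """Simulate synchronous rounds: in each round, every node sends its
    current knowledge to all neighbors, then merges what it receives."""
    knowledge = {v: {v: inputs[v]} for v in adj}       # node -> known (node, input) pairs
    for _ in range(rounds):
        outbox = {v: dict(knowledge[v]) for v in adj}  # snapshot keeps the round synchronous
        for v in adj:
            for u in adj[v]:
                knowledge[v].update(outbox[u])
    return knowledge

# A path on four nodes; after 2 rounds each node knows exactly its 2-hop ball.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
inputs = {0: 7, 1: 3, 2: 9, 3: 1}
knowledge = run_local_model(adj, inputs, rounds=2)
print({v: min(knowledge[v].values()) for v in adj})    # min input within 2 hops
```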
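
For item 3, here is a toy count of block fetches that contrasts a sequential array scan with pointer chasing through a scattered linked list. The block size B and the one-block cache are illustrative simplifications of real memory hierarchies.

```python
import random

def blocks_fetched(addresses, B=64):
    """Count block fetches for an access sequence, assuming words are
    grouped into blocks of B words and only the last block stays cached."""
    fetches, cached = 0, None
    for a in addresses:
        block = a // B
        if block != cached:
            fetches += 1
            cached = block
    return fetches

n = 10_000
scan = range(n)                        # contiguous array scan
nodes = list(range(n))
random.shuffle(nodes)                  # linked-list nodes scattered in memory

print(blocks_fetched(scan))            # about n/64 fetches: good locality
print(blocks_fetched(nodes))           # about n fetches: poor locality
```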
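
For item 4, a minimal sampling sketch: estimating the fraction of inputs with some property by probing a few random positions, with a probe budget that is independent of the input size. The sample-size formula is the standard Hoeffding bound; the oracle and parameters are illustrative.

```python
import math
import random

def estimate_fraction(probe, n, eps=0.05, delta=0.01):
    """Estimate the fraction of indices i in [0, n) with probe(i) == 1,
    to within additive error eps with probability >= 1 - delta, using
    O(log(1/delta) / eps^2) probes -- independent of n."""
    s = math.ceil(math.log(2 / delta) / (2 * eps ** 2))  # Hoeffding bound
    hits = sum(probe(random.randrange(n)) for _ in range(s))
    return hits / s

# A huge implicit input that is never materialized in memory.
n = 10 ** 12
print(estimate_fraction(lambda i: 1 if i % 3 == 0 else 0, n))  # ~ 1/3
```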
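
For item 5, a one-pass, small-space sketch: the Misra-Gries heavy-hitters algorithm, which keeps at most k - 1 counters regardless of the stream length. It is the kind of algorithm a memory-constrained switch could run to monitor frequent flows; the stream and the choice of k here are illustrative.

```python
def misra_gries(stream, k):
    """One pass over the stream with at most k - 1 counters. Every
    element occurring more than m/k times in a stream of length m is
    guaranteed to survive; kept counts undercount by at most m/k."""
    counters = {}
    for x in stream:
        if x in counters:
            counters[x] += 1
        elif len(counters) < k - 1:
            counters[x] = 1
        else:
            for y in list(counters):   # decrement-all step
                counters[y] -= 1
                if counters[y] == 0:
                    del counters[y]
    return counters

stream = [1, 2, 1, 3, 1, 4, 1, 5, 1, 1]
print(misra_gries(stream, k=4))        # 1 occurs > 10/4 times, so it survives
```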

Fields dealing with locality in computation are very active and keep growing in popularity year by year: papers in these fields are consistently presented at all major theoretical computer science conferences, such as STOC, FOCS, and SODA. These fields build on the classical field of algorithm design, while taking into account the constraints imposed by various models of local computation.
Many new algorithms and algorithmic techniques have been developed recently, and these recent results are promising candidates for becoming the foundations of the field of local computation.

The meeting will bring together researchers from various branches of the dynamic field of local computation so that they can share their recent results and insights. We anticipate that new ideas and collaborations will result from a meeting dedicated to the topic of local algorithms. The potential participants come from various countries and institutions, specifically from Israel, Japan, Europe, and the USA. We hope the meeting will strengthen the academic connections between these international research communities, especially between Japanese and non-Japanese researchers.

The participants are invited from the various fields that deal with local computation. We believe a meeting with this diverse mix of researchers can have a very positive impact on these fields, as ideas and techniques from one subfield can be applied in related subfields.

The main research topics that will be discussed in the meeting are:

Commonalities and Differences between Local Models of Computation. Common to all these models of local computation is the information bottleneck. In fact, modern data sets are too big to fit in the fast memory of a single computer, and data is often distributed among several computers or locations. The models differ in how the algorithm accesses the data: at one extreme, property testing allows the algorithm to choose its probes; at the other extreme, streaming algorithms may read the input only once and have severe space limitations (which limits what can be remembered about the input read so far).
Algorithmic techniques have been developed in each of the fields mentioned above. In distributed computing, decomposition algorithms have played a major role in the development of algorithms. In property testing, a different form of decomposition, Szemerédi's Regularity Lemma, has been employed (e.g., in testing triangle-freeness). In sublinear algorithms and streaming algorithms, dimension-reduction algorithms and low-rank matrix approximations have been employed. This rich and sophisticated set of algorithmic tools is tailored to the model at hand. We wish to explore the possibilities of exporting these techniques between the various models (a small illustration of dimension reduction follows below).
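
As one such illustration, here is a dimension-reduction sketch in the spirit of the Johnson-Lindenstrauss lemma: a random Gaussian projection that approximately preserves pairwise distances. The dimensions and the seed below are illustrative choices, not part of the seminar material.

```python
import numpy as np

def jl_project(X, k, seed=0):
    """Project the rows of X from dimension d down to dimension k.
    With k = O(log(n) / eps^2), all pairwise distances among n points
    are approximately preserved, up to a 1 +/- eps factor, with high
    probability."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    R = rng.normal(scale=1.0 / np.sqrt(k), size=(d, k))  # Gaussian sketch matrix
    return X @ R

# 500 points in R^5000 squeezed into R^100; distances survive approximately.
X = np.random.default_rng(1).normal(size=(500, 5000))
Y = jl_project(X, k=100)
```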

Reductions between Models of Local Computation. An ambitious goal is to devise methods that transform an algorithm in one model of local computation to another. Indeed, such relations have been suggested before: for example, how to turn a sublinear algorithm that reads inputs that are "close" to each other into a distributed algorithm. Along these lines, one might ask whether there exist families of distributed algorithms that can be transformed into streaming algorithms. Such reductions are important both for algorithms and lower bounds. Indeed, if we knew of a lower bound in model B and a reduction from model A to model B, then this would imply a lower bound in model A.

New Abstractions of Locality for New Applications. The model of streaming algorithms was devised to capture the computational limits and demands of data network switches. Newer applications, including blockchains (e.g., Bitcoin), autonomous vehicles, and modern cellular networks (e.g., 5G), pose other challenges that need to be distilled. What aspects of these applications are captured by existing models of local computation? Can we suggest other models that capture additional characteristics of newer applications?

The Power of Randomization in Local Computation. In some models, such as property testing and sublinear algorithms, the need for randomness is inherent. In a recent breakthrough by Vaclav Rozhon and Mohsen Ghaffari (reported in Shonan Meeting 162 on Distributed Graph Algorithms), a polylogarithmic-round deterministic distributed algorithm was presented for the maximal independent set (MIS) problem. This breakthrough almost closes the gap between randomized and deterministic distributed algorithms for MIS. However, the power of randomness remains an open problem for many problems in different models of local computation. One intriguing example in data structures is whether randomness is crucial for constant-time, space-efficient dictionaries: all designs to date rely on random hash functions. Understanding the power of randomization in the design of local algorithms is of fundamental importance, and any progress in this direction will have significant implications for the field. (A sketch of the classic randomized MIS algorithm appears below.)
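
To illustrate the role randomness plays here, the following is a minimal sketch of Luby's classic randomized MIS algorithm, written as a sequential simulation of the synchronous rounds (the example graph is an illustrative assumption). Each round, every remaining node draws a random value; local minima join the independent set, and they and their neighbors drop out. The node with the globally smallest value always wins, so the loop terminates, and O(log n) rounds suffice with high probability.

```python
import random

def luby_mis(adj):
    """Luby's randomized maximal independent set, simulated round by round."""
    live, mis = set(adj), set()
    while live:
        r = {v: random.random() for v in live}            # one synchronous round
        winners = {v for v in live
                   if all(r[v] < r[u] for u in adj[v] if u in live)}
        mis |= winners                                    # local minima join the MIS
        live -= winners | {u for v in winners for u in adj[v]}
    return mis

# A 5-cycle; the output is always an independent set that is maximal.
adj = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
print(luby_mis(adj))
```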