NO.221 Algorithmic tools and their applications in emerging models of computation
March 24 - 28, 2025 (Check-in: March 23, 2025)
Organizers
- Guy Even, Tel-Aviv University, Israel
- Robert Krauthgamer, Weizmann Institute of Science, Israel
- Gregory Schwartzman, JAIST, Japan
Overview
Description of the Meeting
The algorithms community has made significant strides in the development and advancement of various algorithmic tools. These tools have proven to be essential in solving complex computational problems efficiently and effectively in specific settings and computational models. We propose to hold a Shonan meeting that brings together experts from diverse disciplines, aiming to distill these algorithmic tools and expand their applications to various emerging computational models, including parallel computing, distributed computing, data-stream algorithms, sampling algorithms, and quantum computing.
Algorithmic Tools
- Interior Point Methods (IPM): Interior point methods have revolutionized the field of convex optimization by providing efficient solutions to linear and nonlinear programming problems. These methods, combined with the Laplacian Paradigm and the development of special data structures, have led to dramatic improvements in classical combinatorial problems, such as maximum flow and min-cost flow. We will explore the applicability of interior point methods in emerging models of computation, and discuss their potential for solving optimization problems in these models.
- Random Projections: Random projections have gained attention as a highly effective method for reducing the dimensionality of data while preserving its structure. We will investigate the applications of random projections in emerging models of computation, including their impact on data-analysis tasks such as clustering, classification, and nearest neighbor search (a minimal sketch appears after this list).
- Graph Decompositions: Graph decomposition techniques have played a major role in the design of parallel, distributed, online and dynamic algorithms. In recent years, expander decompositions have been key to the development of new algorithms for Laplacian solvers, for dynamic graph problems, and for routing. We will explore the potential of decomposition methods in emerging models of computation, and discuss their efficiency and scalability in these models.
- Smoothed Analysis: Smoothed analysis is a framework that combines worst-case analysis with random perturbations to provide a more realistic understanding of algorithmic performance. We will investigate the applicability of smoothed analysis in emerging models of computation, and discuss how it can help analyze the robustness and stability of algorithms in these models.
- Computing with Advice (from Machine Learning): The remarkable recent success of machine learning, including deep learning, has spurred interest in algorithms that can leverage advice provided by machine-learning models. We will explore the possibilities of computing with machine-learning advice in streaming, parallel, distributed, and quantum computing, and discuss how such advice can enhance the efficiency and effectiveness of algorithms in various applications (a toy example appears after this list).
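To make the random-projections item concrete, here is a minimal sketch in Python with numpy, in the spirit of the Johnson-Lindenstrauss lemma. The synthetic data, the dimensions, and the distortion check are illustrative choices, not part of the proposal.

```python
# A minimal Johnson-Lindenstrauss-style random projection sketch.
import numpy as np

rng = np.random.default_rng(0)

n, d, k = 200, 1000, 50          # n points in d dimensions, projected down to k
X = rng.normal(size=(n, d))      # synthetic data set (illustrative assumption)

# Gaussian projection matrix, scaled so squared norms are preserved in expectation.
P = rng.normal(size=(d, k)) / np.sqrt(k)
Y = X @ P                        # projected points, shape (n, k)

# Check how well pairwise distances are preserved on a few random pairs.
for _ in range(10):
    i, j = rng.choice(n, size=2, replace=False)
    orig = np.linalg.norm(X[i] - X[j])
    proj = np.linalg.norm(Y[i] - Y[j])
    print(f"pair ({i:3d},{j:3d}): distortion {proj / orig:.3f}")
```

As a toy example of computing with machine-learning advice, the sketch below implements a ski-rental rule that consults a predicted season length, in the spirit of the algorithms-with-predictions line of work (e.g., Purohit, Svitkina, and Kumar, NeurIPS 2018). The function names, the trust parameter lam, and the example numbers are illustrative assumptions, not part of the proposal.

```python
# Toy ski rental with an ML prediction: renting costs 1 per day, buying costs b.
import math

def buy_day(b: int, predicted_days: float, lam: float) -> int:
    """Return the day on which to buy, given a predicted number of ski days.

    lam in (0, 1] controls how much the prediction is trusted: small lam follows
    the advice aggressively, while lam = 1 recovers the classical break-even rule.
    """
    if predicted_days >= b:
        return math.ceil(lam * b)      # advice says the season is long: buy early
    return math.ceil(b / lam)          # advice says it is short: delay buying, stay robust

def cost(b: int, actual_days: int, day_bought: int) -> int:
    """Total cost if we rent until day_bought and then buy (if that day is reached)."""
    if actual_days < day_bought:
        return actual_days             # rented every day, never bought
    return (day_bought - 1) + b        # rented before buying, then bought

# Example: buying costs 10; the predictor says 15 days, but only 4 materialize.
b, prediction, actual, lam = 10, 15, 4, 0.5
d = buy_day(b, prediction, lam)
print("buy on day", d, "-> cost", cost(b, actual, d), "(offline optimum:", min(actual, b), ")")
```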
Computational Models
- In distributed computing, there is a network in which every node has a local input. The nodes communicate via the network links in synchronous rounds, and the goal is to compute local outputs that satisfy some common (global) objective. For example, the goal could be to compute a minimum spanning tree of the network, where each node learns only which of its incident links belong to the computed tree. The aim in distributed algorithms is to design algorithms that use as few communication rounds as possible (a toy simulation of synchronous rounds appears after this list).
- In data structures, the data is organized in a way that supports operations on it very efficiently. For example, one may want to maintain a set of elements and support basic operations such as insertions, deletions, and various types of queries. The goal in data structures is twofold: (i) find data structures that require as little space as possible, and (ii) support the operations as quickly as possible. Locality is captured in data structures by the fact that memory is accessed in blocks. Hence, each read or write to memory does not access a single bit or word, but rather a whole block of words. Accessed blocks are stored in a special fast memory called a cache, and thus subsequent accesses to words in the cache are much faster than accessing a word whose block is not in the cache. In practice, access to slow memory often dominates the computation time, and thus it is desirable to pack information so that operations access as few blocks as possible (a toy block-transfer count of an array scan versus a pointer-based list appears after this list). Indeed, classical data structures such as linked lists and search trees that employ pointers are not efficient with respect to this notion of locality. The challenge in data structures is to understand the tradeoffs between the space requirements and the time it takes to process various operations. This challenge is addressed via algorithms and lower bounds.
An algorithm provides a data structure with upper bounds on the space usage and running time of operations, while a lower bound sets minimal limits on the space and running time, or a tradeoff between the two.
- In the data-stream model, the algorithm can read the input only sequentially and has bounded space. In particular, it cannot store much of the input in its own memory. This model arises naturally in communication networks, where a great deal of information traverses a router. Such a router has limited memory resources, operates under strict performance requirements, and must additionally monitor the traffic to detect events such as cyber attacks. The challenge is to design algorithms that can deduce interesting properties of the input in spite of the space limitations. This challenge is addressed via algorithms and lower bounds: an algorithm provides an upper bound on the required space, while a lower bound tells us how much space must be used by every possible algorithm (a bounded-space heavy-hitters sketch appears after this list).
- In parallel computing, multiple processors or cores execute computation simultaneously in order to speed up the execution of algorithms. Indeed, by dividing the workload across multiple processors, many tasks can be executed with significantly improved performance, enabling us to solve problems of much larger scale, which has become extremely important with the demise of Moore's law (for sequential computation). Efficient communication and synchronization mechanisms are crucial for achieving optimal performance in parallel models (a toy work-division sketch appears after this list).
- Quantum computing harnesses the principles of quantum mechanics to perform computations. Quantum computers use quantum bits, abbreviated qubits, which can represent multiple states simultaneously, allowing for parallel-like processing. Quantum computing has the potential to solve certain problems more efficiently than classical computers, especially those related to factorization, optimization, and simulation. Quantum algorithms are designed to take advantage of the unique properties of quantum systems, such as superposition and entanglement, to provide computational advantages in specific domains (a small state-vector simulation appears after this list).
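The following sketch simulates the synchronous message-passing model described above: in every round each node exchanges its current state with its neighbours, and the nodes jointly compute BFS distances from a root by flooding. The network, the root, and the message format are illustrative assumptions.

```python
# Toy simulation of synchronous rounds in a message-passing network.
from math import inf

def synchronous_bfs(adj, root):
    """adj: dict node -> list of neighbours (the network). Returns (distances, rounds)."""
    dist = {v: (0 if v == root else inf) for v in adj}
    rounds = 0
    while True:
        # One round: every node sends its current distance estimate to all neighbours.
        inbox = {v: [dist[u] for u in adj[v]] for v in adj}
        updated = False
        for v in adj:
            best = min(inbox[v], default=inf) + 1
            if best < dist[v]:
                dist[v] = best
                updated = True
        rounds += 1
        if not updated:            # no node changed state: the computation has stabilised
            return dist, rounds

network = {1: [2, 3], 2: [1, 4], 3: [1, 4], 4: [2, 3, 5], 5: [4]}
dist, rounds = synchronous_bfs(network, root=1)
print(dist, "computed in", rounds, "rounds")
```

The block-based memory access discussed in the data-structures item can be illustrated with a toy transfer counter: a sequential array scan touches about n/B blocks, whereas chasing the pointers of a scattered linked list touches about n blocks. The block size, the one-block cache policy, and the random layout are simplifying assumptions.

```python
# Toy external-memory accounting: count block transfers for two access patterns.
import random

B = 64          # words per block (assumption)
n = 10_000

def blocks_touched(addresses, B):
    """Count block transfers for a sequence of word addresses, assuming a cache
    that keeps only the most recently fetched block."""
    transfers, cached = 0, None
    for a in addresses:
        blk = a // B
        if blk != cached:
            transfers += 1
            cached = blk
    return transfers

array_scan = range(n)                          # contiguous array: about n / B transfers
perm = list(range(n)); random.shuffle(perm)    # scattered linked list: about n transfers

print("array scan:", blocks_touched(array_scan, B), "block transfers")
print("list walk :", blocks_touched(perm, B), "block transfers")
```

For the data-stream model, the sketch below shows a classical bounded-space algorithm, Misra-Gries heavy hitters: with only k - 1 counters (space independent of the stream length), any item occurring more than a 1/k fraction of the time is guaranteed to survive. The example stream and the choice of k are illustrative.

```python
# Misra-Gries heavy hitters: one pass, k - 1 counters of space.
def misra_gries(stream, k):
    counters = {}
    for x in stream:
        if x in counters:
            counters[x] += 1
        elif len(counters) < k - 1:
            counters[x] = 1
        else:
            # Decrement every counter; drop the ones that reach zero.
            for y in list(counters):
                counters[y] -= 1
                if counters[y] == 0:
                    del counters[y]
    return counters  # candidate heavy hitters (counts are underestimates)

# Example stream: 'a' is a heavy hitter, the rest is noise.
stream = ['a', 'b', 'a', 'c', 'a', 'd', 'a', 'b', 'a', 'e', 'a', 'f']
print(misra_gries(stream, k=3))
```

The parallel-computing item can be illustrated by the simplest form of work division: split the input into chunks and let separate worker processes reduce them. The chunking, the worker count, and the use of Python's ProcessPoolExecutor are illustrative; real parallel algorithms must also account for communication and synchronization costs.

```python
# Toy data-parallel reduction: sum a large list by dividing it across processes.
from concurrent.futures import ProcessPoolExecutor

def chunk_sum(chunk):
    return sum(chunk)

def parallel_sum(data, workers=4):
    size = (len(data) + workers - 1) // workers
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(chunk_sum, chunks))

if __name__ == "__main__":          # guard required for process-based parallelism
    data = list(range(1_000_000))
    print(parallel_sum(data), "==", sum(data))
```

Finally, superposition and entanglement can be illustrated with a plain state-vector simulation (pure numpy; no quantum SDK is assumed): a Hadamard gate followed by a CNOT prepares the Bell state (|00> + |11>)/sqrt(2), and sampled measurements of the two qubits are perfectly correlated.

```python
# State-vector sketch: prepare a Bell state and sample correlated measurements.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)     # Hadamard gate
I2 = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

state = np.zeros(4); state[0] = 1.0              # start in |00>
state = CNOT @ (np.kron(H, I2) @ state)          # Bell state (|00> + |11>)/sqrt(2)

probs = np.abs(state) ** 2
rng = np.random.default_rng(0)
samples = rng.choice(["00", "01", "10", "11"], size=1000, p=probs)
# The two qubits are entangled: only "00" and "11" ever appear.
print({s: int((samples == s).sum()) for s in ["00", "01", "10", "11"]})
```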
The above fields are extremely active and continue to grow in popularity year after year; papers in these fields are consistently presented at all major theoretical computer science conferences, such as STOC, FOCS, and SODA.
These fields build on the classical field of algorithm design, while leveraging the advantages, and adhering to the constraints, of the different models. Many new algorithms and algorithmic techniques have been developed recently, and these recent results are promising candidates to become the foundation of the above fields.
The proposed meeting will bring together leading researchers in the aforementioned fields, so that they can share their recent results, insights, and vision. We anticipate that new ideas and collaborations will emerge from a meeting dedicated to the topic of "algorithmic tools and their applications to emerging models of computation". The potential participants of the meeting come from leading institutions in various regions, especially Israel, Japan, Europe, and the USA. We hope the proposed meeting will strengthen the academic connections between these international research communities, especially between Japanese and non-Japanese researchers.
We believe a meeting with this diverse mix of researchers can have a very positive impact on these fields, as ideas and techniques from one subfield can be applied to a related subfield.