Brown (Computer Science) Theory Seminar

Brown University

The Brown Theory Seminar is a weekly seminar run by the theory group in the Brown University CS Department. The seminar focuses on inviting speakers from outside Brown who work across theoretical computer science.

If you’d like to give a talk, please get in touch with the organizer: Ellis Hershkowitz! Travel funds are available.

If you’ve agreed to give a talk, please see this.

To get emails about new talks, you can sign up for the theory listserv by following this link.

You can sign up for our Google Calendar of talks by following this link.

We post talks on our YouTube channel.

Time and Location
All talks are held from noon to 1PM on Wednesdays in CIT 241 unless otherwise stated.


Spring 2024 Schedule


Date Speaker Title Abstract Recording
January 31 (Wed)
In CIT 316
Roie Levin Chasing Positive Bodies
We study the problem of chasing positive bodies in \(\ell_1\): given a sequence of bodies \(K_t = \{x \in \mathbb{R}^n_+ : C^t x \geq 1, P^t x \leq 1\}\) revealed online, where \(C^t\) and \(P^t\) are nonnegative matrices, the goal is to (approximately) maintain a point \(x^t \in K_t\) such that \(\sum_{t} \|x^t - x^{t-1}\|_1\) is minimized. This captures the fully dynamic, low-recourse variant of any problem that can be expressed as a mixed packing-covering linear program, and thus also the fractional version of many central problems in dynamic algorithms such as set cover, load balancing, hyperedge orientation, minimum spanning tree, and matching.

We give an \(O(\log d)\)-competitive algorithm for this problem, where \(d\) is the maximum row sparsity of any matrix \(C^t\). This bypasses, and improves exponentially over, the \(\sqrt{n}\) lower bound known for chasing general convex bodies. Our algorithm is based on iterated information projections and, in contrast to general convex body chasing algorithms, is entirely memoryless. We also show how to round our solution dynamically, obtaining the first fully dynamic algorithms with competitive recourse for all of the problems stated above; i.e., their recourse is at most the recourse of every other algorithm on every update sequence, up to polylogarithmic factors. This is a significantly stronger notion than the absolute recourse guarantees standard in the dynamic algorithms literature.

This is joint work with Sayan Bhattacharya, Niv Buchbinder, and Thatchaphol Saranurak.
Talk
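For readers less familiar with the setup, here is a minimal sketch of the per-step objective only: greedily moving to the nearest point of \(K_t\) in \(\ell_1\) distance by solving a small linear program. The matrices and starting point are made-up toy data, and this greedy step is just a baseline for intuition, not the memoryless iterated-information-projection algorithm described above.

```python
# A minimal sketch (not the algorithm from the talk): at each step, greedily move
# to the nearest point of K_t in l1 distance by solving a small LP with SciPy.
# The matrices C, P and the starting point below are made-up toy data.
import numpy as np
from scipy.optimize import linprog

def l1_project(x_prev, C, P):
    """Return argmin_{x in K} ||x - x_prev||_1 where K = {x >= 0 : Cx >= 1, Px <= 1}.

    We introduce auxiliary variables y_i >= |x_i - x_prev_i| and minimize sum(y).
    """
    n = len(x_prev)
    c = np.concatenate([np.zeros(n), np.ones(n)])         # objective: sum of y
    I = np.eye(n)
    A_ub = np.vstack([
        np.hstack([-C, np.zeros_like(C)]),                 # Cx >= 1  ->  -Cx <= -1
        np.hstack([P, np.zeros_like(P)]),                  # Px <= 1
        np.hstack([I, -I]),                                # x - y <= x_prev
        np.hstack([-I, -I]),                               # -x - y <= -x_prev
    ])
    b_ub = np.concatenate([-np.ones(C.shape[0]), np.ones(P.shape[0]), x_prev, -x_prev])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (2 * n))
    return res.x[:n]

# Toy instance: one covering constraint and one packing constraint over 3 coordinates.
C = np.array([[1.0, 1.0, 0.0]])      # x1 + x2 >= 1
P = np.array([[0.0, 1.0, 1.0]])      # x2 + x3 <= 1
x = np.zeros(3)
x_new = l1_project(x, C, P)
print("moved to", x_new, "with l1 movement", np.abs(x_new - x).sum())
```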
February 7 (Wed)
In CIT 316
Ted Pyne Opening Up the Distinguisher: A Hardness to Randomness Approach for RL = L that Uses Properties of RL
The Hardness to Randomness paradigm, which obtains derandomization from circuit lower bounds, has proven essential in understanding time-bounded derandomization (BPP vs. P). However, there has been relatively little interest in using the paradigm to derandomize small-space algorithms, i.e., Randomized Logspace vs. Logspace (RL vs. L).

This is, perhaps, understandable. In light of the unconditional progress on RL vs. L, it was unclear if the hardness to randomness paradigm was necessary. Even worse, all existing progress on derandomizing RL used special properties of RL algorithms (that they use their randomness in a read-once fashion), but it was unknown how to exploit this for hardness to randomness results, leaving the required lower bounds far out of reach.

We provide compelling evidence that the hardness-to-randomness paradigm can drive progress on unconditional derandomization of RL. We prove that derandomization follows from remarkably weak lower bounds, in particular lower bounds against deterministic, uniform models for which strong (but non-matching) lower bounds can already be proved. Our proofs are crucially non-black-box, and use special properties of space-bounded algorithms.

Joint work with Dean Doron and Roei Tell.
Talk
February 14 (Wed)
1:30-2:30, Lubrano (CIT 477)
Hung Le New Decompositions for Planar Graphs and Applications
We recently introduced the notion of a shortcut partition: a partition of a planar graph into clusters of small diameter such that, for every pair of vertices, one can get from one to the other by hopping through only a few clusters of the partition. This partition (and its construction) turned out to be the key to resolving several open problems in planar graphs, including the Steiner point removal problem, the construction of low-treewidth embeddings, and constant-size tree covers. In this talk, I will describe the construction of the shortcut partition and mention some of its applications.
Talk
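As a rough illustration of the kind of object involved (and explicitly not the construction from the talk, which is planarity-specific and additionally controls how many clusters a path must cross), here is a generic BFS ball-carving that partitions a graph into clusters of bounded radius.

```python
# A hedged illustration only: a generic BFS "ball carving" that partitions a graph
# into clusters of bounded radius. The shortcut partition from the talk additionally
# guarantees that one can travel between any two vertices through few clusters and
# uses a different, planarity-specific construction; this just shows the kind of object.
from collections import deque

def ball_carving(adj, radius):
    """Partition vertices of an unweighted graph into clusters of radius <= `radius`.

    adj: dict mapping each vertex to an iterable of neighbors.
    Returns a dict mapping each vertex to a cluster id.
    """
    cluster = {}
    next_id = 0
    for root in adj:
        if root in cluster:
            continue
        # Grow a BFS ball of the given radius around an uncovered vertex.
        queue = deque([(root, 0)])
        cluster[root] = next_id
        while queue:
            v, dist = queue.popleft()
            if dist == radius:
                continue
            for u in adj[v]:
                if u not in cluster:
                    cluster[u] = next_id
                    queue.append((u, dist + 1))
        next_id += 1
    return cluster

# Toy example: a 6-cycle carved into clusters of radius 1.
adj = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
print(ball_carving(adj, radius=1))
```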
February 21 (Wed)
No seminar
       
February 28 (Wed)
No seminar
       
March 6 (Wed)
No seminar
       
March 13 (Wed)
No seminar
       
March 20 (Wed)
No seminar
       
March 27 (Wed)
No seminar (Spring Break)
       
April 3 (Wed)
1:30-2:30, Lubrano (CIT 477)
Huy Lê Nguyễn High Probability Convergence of Stochastic Gradient Descent Methods
Stochastic optimization is a well-studied area with many applications in a variety of domains, from machine learning to operations research, numerical linear algebra, and beyond. In contrast with deterministic algorithms, stochastic algorithms might fail, and a pertinent question is to understand how often this happens and how to make changes that increase the success rate. This question is especially important not only in critical applications where failure is unacceptable, but also in many machine learning applications where each run is very expensive and time-consuming. In this talk we will survey some recent results on the high-probability convergence of different gradient descent methods under different levels of noise and for different types of objective functions.
Talk
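As a toy illustration of why high-probability behavior matters, the sketch below runs SGD on f(x) = x^2/2 with heavy-tailed gradient noise and estimates how often a single run ends far from the optimum, with and without gradient clipping (one commonly studied modification; the methods and analyses in the talk may differ). All constants are arbitrary.

```python
# A toy experiment in the spirit of the talk (the specific methods and analyses in the
# talk may differ): run SGD on f(x) = x^2 / 2 with heavy-tailed gradient noise and
# estimate how often a single run ends far from the optimum, with and without
# gradient clipping, one commonly studied fix for improving high-probability behavior.
import numpy as np

rng = np.random.default_rng(0)

def sgd_final_error(clip=None, steps=200, lr=0.05):
    x = 5.0
    for _ in range(steps):
        noise = rng.standard_t(df=2.5)           # heavy-tailed noise
        g = x + noise                            # stochastic gradient of x^2 / 2
        if clip is not None:
            g = max(min(g, clip), -clip)         # clip the gradient to [-clip, clip]
        x -= lr * g
    return abs(x)

trials = 2000
plain = np.array([sgd_final_error() for _ in range(trials)])
clipped = np.array([sgd_final_error(clip=5.0) for _ in range(trials)])
print("P(error > 1), plain SGD:  ", np.mean(plain > 1.0))
print("P(error > 1), clipped SGD:", np.mean(clipped > 1.0))
```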
April 10 (Wed) Rajesh Jayaram Data-Dependent LSH for the Earth Mover’s Distance
In this talk, we discuss the first improvement in over a decade in the approximation for nearest neighbor search under the Earth Mover’s Distance (EMD). Given two sets A, B of s vectors each in high-dimensional space (R^d), the EMD between A and B is the minimum cost of a perfect matching between the vectors in A and B, where the edge weights are given by their Euclidean distances. EMD is a classic metric in computer science (dating back over 100 years to the Hungarian algorithm of Jacobi) and a standard distance between two sets of points. In nearest neighbor search, one has a collection of n such sets A1, …, An, which one pre-processes so that, given a query set Q, one can quickly return a set Ai whose distance (under EMD) to Q is within a C-factor of that of the nearest neighbor.

To date, the only algorithm with sublinear O(n^ε) query time was given by Andoni, Indyk, and Krauthgamer [AIK, SODA ’08], who designed a (data-independent) locality sensitive hash function (LSH) for EMD with approximation O(log^2 s). In this work, we improve this approximation to Õ(log s) in the same runtime by designing the first data-dependent LSH functions for EMD. The talk will discuss the main techniques behind sublinear algorithms for EMD, including the tree embedding techniques of [AIK ’08] and [Chen, Jayaram, Levi, Waingarten STOC ’22], as well as the key insights needed to glue them together into an improved LSH for EMD.

Joint work with Erik Waingarten and Tian Zhang (STOC ‘24, https://arxiv.org/abs/2403.05041).
Talk
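For concreteness, here is a small exact EMD computation on toy data. It only instantiates the definition above (it is not the LSH from the talk): for two equal-size point sets, EMD is the cost of a minimum-weight perfect matching, computed here with SciPy's Hungarian-algorithm implementation.

```python
# A small, exact EMD computation on toy data (no sketch of the LSH from the talk):
# for two equal-size point sets, EMD is the cost of a minimum-weight perfect matching.
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def emd(A, B):
    """Earth Mover's Distance between equal-size point sets A, B (rows are vectors)."""
    cost = cdist(A, B)                        # pairwise Euclidean distances
    rows, cols = linear_sum_assignment(cost)  # minimum-cost perfect matching
    return cost[rows, cols].sum()

rng = np.random.default_rng(1)
A = rng.normal(size=(8, 3))                   # toy sets: 8 vectors in R^3
B = rng.normal(size=(8, 3))
print("EMD(A, B) =", emd(A, B))
```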
April 18 (Thurs)
12:00-1:00, CIT 241, note unusual day
Yevgeniy Dodis Random Number Generation and Extraction
Generating random numbers is an essential task in cryptography. Random numbers are necessary not only for generating cryptographic keys, but also in many steps of cryptographic algorithms and protocols (e.g., initialization vectors for symmetric encryption, password generation, nonce generation). Indeed, a lack of assurance about the quality of the generated random numbers can cause serious damage to cryptographic protocols and create vulnerabilities that attackers can exploit. In this talk we revisit the surprisingly rich landscape of random number generation, ranging from theoretical impossibility results to building real-world random number generators (RNGs) for Windows, Apple, and Linux. Some example topics include the impossibility of basing cryptography on entropy alone, improved key derivation functions, seedless randomness extraction, the design and analysis of the “super-fast” entropy accumulation found in most modern RNGs, and post-compromise security of RNGs in light of “premature next” attacks.
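As a classical warm-up to the extraction theme (far simpler than the extractors and entropy accumulators in the talk), the sketch below shows von Neumann debiasing: turning independent but biased coin flips into unbiased bits by looking at disjoint pairs.

```python
# A classical toy example of randomness extraction (von Neumann debiasing), far simpler
# than the constructions discussed in the talk: given independent but biased coin flips,
# output unbiased bits by examining disjoint pairs.
import random

def von_neumann_extract(bits):
    """Turn i.i.d. biased bits into unbiased bits: 01 -> 0, 10 -> 1, 00/11 -> discard."""
    out = []
    for a, b in zip(bits[::2], bits[1::2]):
        if a != b:
            out.append(a)
    return out

# Source with bias 0.8 toward 1; the extracted bits are unbiased (though fewer of them).
biased = [1 if random.random() < 0.8 else 0 for _ in range(10000)]
extracted = von_neumann_extract(biased)
print(len(extracted), "bits, fraction of ones:", sum(extracted) / len(extracted))
```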
 
April 24 (Wed) Alex Psomas Theory and Practice of Fair Food Allocation
Food rescue organizations worldwide are leading programs aimed at addressing food waste and food insecurity. Food Drop is one such program, run by the non-governmental organization Indy Hunger Network (IHN) in the state of Indiana, in the United States. Food Drop matches truck drivers carrying rejected truckloads of food (food that would otherwise be discarded at a landfill) with food banks. Matching decisions are currently made by the Food Assistance Programs Manager of IHN. In this talk, I will discuss a partnership with IHN whose goal is to completely automate Food Drop. Motivated by this collaboration, I will present a series of theoretical models and results for the fair division of indivisible goods, with each model refining our understanding of the essence of IHN’s problem. These results directly informed our choice of algorithm for matching drivers to food banks in the platform we built for IHN.
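As a textbook point of reference only (the talk develops much richer models), the sketch below shows round-robin allocation of indivisible goods, which is envy-free up to one good (EF1) when agents have additive valuations; the valuations are made-up toy numbers.

```python
# A textbook baseline only (the talk develops much richer models): round-robin
# allocation of indivisible goods, which is EF1 for agents with additive valuations.
def round_robin(valuations):
    """valuations[i][g] = value of good g to agent i; returns a list of bundles."""
    n, m = len(valuations), len(valuations[0])
    remaining = set(range(m))
    bundles = [[] for _ in range(n)]
    turn = 0
    while remaining:
        agent = turn % n
        # The current agent takes their most valuable remaining good.
        best = max(remaining, key=lambda g: valuations[agent][g])
        bundles[agent].append(best)
        remaining.remove(best)
        turn += 1
    return bundles

valuations = [
    [8, 1, 5, 3],   # agent 0's values for goods 0..3 (toy numbers)
    [2, 7, 6, 4],   # agent 1's values for goods 0..3 (toy numbers)
]
print(round_robin(valuations))   # [[0, 2], [1, 3]]
```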
 
May 1 (Wed) Anson Kahng Automating Ethical Decision-Making with Virtual Democracy
Virtual democracy aims to automate ethical decision-making by learning models of individuals’ ethical preferences and letting these models participate in elections over possible actions. My work studies aggregation methods, or voting rules, for making a final decision based on predicted votes. I will discuss our efforts to implement virtual democracy through a partnership with a nonprofit organization in the context of food rescue. (This talk will hopefully dovetail nicely with Alex’s talk the previous week!)
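To make the aggregation step concrete (using a standard voting rule that is not necessarily one studied in the talk), the sketch below picks a winning action from predicted rankings using the Borda count; the action names are invented.

```python
# An illustrative aggregation step only (not necessarily a rule studied in the talk):
# given rankings over possible actions predicted by learned models of individuals,
# pick a winner with the Borda count, a standard voting rule.
from collections import defaultdict

def borda_winner(rankings):
    """rankings: list of rankings, each a list of actions from most to least preferred."""
    scores = defaultdict(int)
    for ranking in rankings:
        m = len(ranking)
        for position, action in enumerate(ranking):
            scores[action] += m - 1 - position   # top choice gets m-1 points, last gets 0
    return max(scores, key=scores.get)

# Toy predicted votes over three hypothetical actions.
predicted_votes = [
    ["donate", "compost", "landfill"],
    ["compost", "donate", "landfill"],
    ["donate", "landfill", "compost"],
]
print(borda_winner(predicted_votes))   # "donate"
```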