Brown University
The Brown Theory Seminar is a weekly seminar run by the theory group in the Brown University CS Department. The seminar focuses on inviting speakers from outside Brown who work across theoretical computer science.
Time and Location: All talks are held from noon to 1PM on Wednesdays in CIT 241 unless otherwise stated.
Mailing List: To get emails about new talks, you can subscribe to the theory mailing list here.
Calendar: You can subscribe to our Google Calendar with all the talk information here.
Recordings: We post talk recordings on our YouTube channel.
For Speakers: Please find relevant information here.
If you are interested in giving a talk, please get in touch with Peihan Miao (organizer for Fall 2024). Travel funds are available.
Date  Speaker  Title  Abstract  Recording 

Sept 11 (Wed)  Sebastien Roch  Some Sample Complexity Bounds in Phylogenomics  The reconstruction of species phylogenies from genomic data is a key step in modern evolutionary studies. This task is complicated by the fact that genes evolve under biological phenomena that produce discordant histories. These include hybrid speciation, horizontal gene transfer, gene duplication and loss, and incomplete lineage sorting, all of which can be modeled using random gene tree distributions building on well-studied discrete stochastic processes (branching processes, the coalescent, random rearrangements, etc.). Gene trees are in turn estimated from molecular sequences using Markov models on trees. The rigorous analysis of the resulting complex models can help guide the design of new reconstruction methods with computational and statistical guarantees. I will illustrate the challenges and opportunities in this area via some recent sample complexity bounds. No biology background will be assumed. 
Talk 
Sept 20 (Fri)  Shivam Nadimpalli  Gaussian Polytope Approximation  We study the approximability of high-dimensional convex sets by intersections of halfspaces, where the approximation quality is measured with respect to the standard Gaussian distribution and the complexity of an approximation is the number of halfspaces used. We establish a range of upper and lower bounds both for general convex sets and for specific natural convex sets that are of particular interest. We rely on techniques from many different areas, including classical results from convex geometry, Cramér-type bounds from probability theory, and—perhaps surprisingly—a range of topics from computational complexity theory, including computational learning theory, unconditional pseudorandomness, and the study of influences and noise sensitivity in the analysis of Boolean functions. Based on joint work (https://arxiv.org/abs/2311.08575) with Anindya De and Rocco Servedio that will appear in FOCS 2024. 
Talk 
Sept 25 (Wed)  Kangning Wang  Metric Distortion in Social Choice  This talk will be about metric distortion in social choice. I will cover several major results (by us and others) in this area, and discuss open questions and directions. In a standard social choice scenario, voters express their preferences over candidates through votes, and then a voting rule aggregates those votes and selects one winning candidate. These votes can often be in the ranked-choice form, meaning that each voter submits her ranking of all the candidates. How efficient/effective can such voting be? In the metric distortion framework, we assume all voters and candidates reside in the same metric space, and each voter bears a cost equal to the distance between her and the elected candidate. The goal is to minimize the sum of costs over the voters. Various classical and novel voting rules give the following guarantee: the sum of costs of the elected candidate is at most a constant times the true optimal sum of costs. Improving this guarantee has been the center of this research area, and this talk will review some recent progress. 
Talk 
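The metric distortion framework described in the abstract above can be made concrete with a toy sketch (my own illustration, not material from the talk; the one-dimensional metric, the plurality rule, and the instance are all hypothetical choices): voters and candidates are points on a line, each voter's top choice is her nearest candidate, and distortion is the ratio of the elected candidate's social cost to the optimum.

```python
# Toy metric-distortion example: 1-D metric, plurality rule.
# All names and the instance are illustrative, not from the talk.

def social_cost(candidate, voters):
    # Sum of distances from all voters to a candidate.
    return sum(abs(v - candidate) for v in voters)

def plurality_winner(voters, candidates):
    # Each voter votes for her nearest candidate (the top of her ranking);
    # ties break toward the earlier-listed candidate.
    counts = [0] * len(candidates)
    for v in voters:
        nearest = min(range(len(candidates)), key=lambda i: abs(v - candidates[i]))
        counts[nearest] += 1
    return candidates[max(range(len(candidates)), key=lambda i: counts[i])]

def distortion(voters, candidates):
    # Ratio of the elected candidate's cost to the optimal candidate's cost.
    winner = plurality_winner(voters, candidates)
    opt = min(social_cost(c, voters) for c in candidates)
    return social_cost(winner, voters) / opt

voters = [0.49, 0.49, 0.49, 1.0, 1.0]  # three voters slightly closer to candidate 0
candidates = [0.0, 1.0]
print(distortion(voters, candidates))  # ≈ 2.27: plurality elects 0, but 1 is optimal
```

Here plurality sees only rankings, so it cannot tell that the majority barely prefers candidate 0; the distortion bounds surveyed in the talk quantify the worst case of exactly this information loss.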
Oct 2 (Wed)  Apoorv Vikram Singh  Moments, Random Walks, and Limits for Spectrum Approximation  Given random-walk access to a graph, we study lower bounds for estimating the eigenvalues of the normalized adjacency matrix of the graph in ℓ1 norm. I will begin by introducing the problem of estimating eigenvalues of the graph in ℓ1 norm, go over a previous technique of [Cohen-Steiner et al., KDD 2018] for estimating the eigenvalues, and then present our lower-bound instance, which shows that the technique of [Cohen-Steiner et al., KDD 2018] is optimal. I will also present one of the consequences of our lower-bound instance: there are distributions on [−1, 1] that cannot be approximated to accuracy eps in Wasserstein-1 distance even if we know all of their moments to multiplicative accuracy (1 ± exp(−1/eps)); this result matches an upper bound of [Kong and Valiant, Annals of Statistics 2017]. This talk is based on a joint work with Yujia Jin, Christopher Musco, and Aaron Sidford, which appeared in COLT 2023. 
Talk 
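The moment-based estimation mentioned in the abstract above rests on a simple identity that can be sketched in a few lines (my own toy version, not the paper's code; the 6-cycle instance and all parameters are hypothetical): for a d-regular graph, the k-th spectral moment of the normalized adjacency matrix equals the probability that a k-step random walk returns to its starting vertex.

```python
import math
import random

# Toy sketch: estimate the k-th spectral moment of the normalized adjacency
# matrix of the 6-cycle (a 2-regular graph) from random-walk return
# probabilities, and compare with the exact value. Instance and parameters
# are illustrative only.

n, k, trials = 6, 2, 20000
neighbors = {v: [(v - 1) % n, (v + 1) % n] for v in range(n)}  # cycle C_6

random.seed(0)
returns = 0
for _ in range(trials):
    start = random.randrange(n)        # uniformly random start vertex
    v = start
    for _ in range(k):                 # k steps of the simple random walk
        v = random.choice(neighbors[v])
    returns += (v == start)
estimate = returns / trials

# Exact k-th moment: eigenvalues of the normalized adjacency of C_n are
# cos(2*pi*j/n), so the moment is their average k-th power.
exact = sum(math.cos(2 * math.pi * j / n) ** k for j in range(n)) / n
print(estimate, exact)  # both close to 0.5
```

The lower bound discussed in the talk shows the limits of this route: even very accurate moment estimates of this kind pin down the spectrum only to limited Wasserstein-1 accuracy.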
Oct 9 (Wed)  Zhengzhong Jin  Program Obfuscation, SNARGs, and Mathematical Logic  I will present a recent line of work that utilizes tools from mathematical logic—particularly bounded arithmetic developed by Cook, Buss, and many others—to make progress on two intriguing open questions in cryptography: (1) constructing succinct program obfuscation for Turing machines, where the obfuscated program size does not grow with an a priori upper bound on the input length, and (2) building succinct cryptographic proofs from standard assumptions that can compress the length of any NP witness. The ideas and tools may be helpful to other problems in cryptography and may also suggest new questions for proof complexity. 
Talk 
Oct 16 (Wed)  Nathan Klein  Ghost Value Augmentation for k-Edge-Connectivity  We show that every fractionally k-edge-connected weighted graph (i.e. every solution to the canonical k-edge-connectivity linear program) can be rounded to an integral (k-10)-edge-connected graph of no greater cost. This implies that for large constant values of k, fractional k-edge-connectivity and integral k-edge-connectivity are essentially the same. As a byproduct of this result, we show that one can produce a (k-10)-edge-connected spanning subgraph (ECSS) of cost no more than the optimal k-ECSS, complementing the existing 2-approximation. In addition, this result implies a 1+O(1/k) approximation for the k-edge-connected multi-subgraph problem (k-ECSM), resolving a conjecture of Pritchard from 2011. This is joint work with Ellis Hershkowitz and Rico Zenklusen. 
Talk 
Oct 23 (Wed)  Simon Meierhans  Almost-Linear Time Algorithms for Partially Dynamic Graphs  A partially dynamic graph is a graph that undergoes edge insertions or deletions, but not both. In this talk, I present a unifying framework that resolves numerous well-studied problems on such partially dynamic graphs almost-optimally. These include cycle detection, strongly connected components, s-t distances, transshipment, bipartite matching, maximum flow and minimum-cost flow. We achieve this unification by solving the partially dynamic threshold minimum-cost flow problem. In this problem, one is given a fixed demand vector and a threshold \(F\), and tasked with reporting the first time a flow of cost \(F\) exists or ceases to exist for insertions and deletions respectively. We give separate algorithms for solving this problem in the incremental and the decremental case. Both follow the ℓ1-IPM framework introduced as part of the first almost-linear time minimum-cost flow algorithm [Chen-Kyng-Liu-Peng-Probst Gutenberg-Sachdeva, FOCS’22]. For handling edge insertions, we develop new powerful data structures [Kyng-Meierhans-Probst Gutenberg, STOC’24] to solve the central min-ratio cycle problem against a general adversary [Chen-Kyng-Liu-Meierhans-Probst Gutenberg, STOC’24]. For handling edge deletions, we take the dual perspective. This leads us to a min-ratio cut problem, which we solve by constructing a dynamic tree that approximately preserves all cuts [van den Brand-Chen-Kyng-Liu-Meierhans-Probst Gutenberg-Sachdeva, FOCS’24]. 
Talk 
Oct 30 (Wed)  Clayton Sanford  Transformer Neural Networks Through a Communication Complexity Lens  Multi-layer transformer models form the backbone of modern deep learning, yet there is little mathematical work detailing their benefits and deficiencies as compared with other architectures. This makes it difficult to answer practical and fundamental questions about the transformer architecture: What powers are provided by increasing the depth of the model? Can alternative architectures—including state-space models like RNNs or Mamba, GNNs, or linear-time attention models—improve efficiency without sacrificing expressivity? In this talk, I’ll present a communication-based theoretical framework for understanding the representational capabilities and limitations of multi-layer transformers. Concretely, this work establishes a bidirectional relationship between the transformer architecture and the Massively Parallel Computation (MPC) model of distributed computing. This connection tightly captures the representational powers of transformers whose depth grows logarithmically in the context length and provides a separation between the transformer and alternative architectures. These results imply that parallelizability is a key property of the standard transformer that cannot be easily replicated by other architectures. I’ll also discuss empirical results that suggest that these parallelizable logarithmic-depth constructions can be found by gradient descent in an interpretable manner. 

Nov 6 (Wed)  Tegan Wilson  Breaking the VLB Barrier for Oblivious Reconfigurable Networks  Valiant Load Balancing (VLB) has a long history in both the theory and practice of oblivious routing. Even after forty years, VLB continues to take center stage as a widely used — and in some settings, provably optimal — way to balance load in the network obliviously to the traffic demands. However, the ability of the network to rapidly reconfigure its interconnection topology gives rise to new possibilities. In this work we revisit the question of whether VLB remains optimal in the novel setting of reconfigurable networks. Prior work [AWSWKA’22] showed that VLB achieves the optimal tradeoff between latency and guaranteed throughput. In this work, we show that a strictly superior latency-throughput tradeoff is achievable when the throughput bound is relaxed to hold with high probability. Our results are enabled by a novel oblivious routing scheme that improves VLB by stretching routing paths the minimum possible amount — an additive stretch of 1 rather than a multiplicative stretch of 2 — yet still manages to balance load with high probability when either the traffic demand matrix or the network’s interconnection schedule is shuffled by a uniformly random permutation. This talk is based on joint work with Daniel Amir, Nitika Saran, Robert Kleinberg, Vishal Shrivastav, and Hakim Weatherspoon. 
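The classic VLB idea the abstract above builds on can be sketched in a few lines (my own illustration with hypothetical parameters, not the paper's scheme): relaying every packet through a uniformly random intermediate node balances load even under an adversarial demand, at the price of the two-hop paths (the multiplicative stretch of 2 that the talk's new scheme improves to an additive stretch of 1).

```python
import random

# Toy sketch of two-hop Valiant Load Balancing on a fully connected n-node
# network (parameters are illustrative). All demand goes from node 0 to
# node 1 -- an adversarial pattern for direct routing, which would put every
# packet on one link -- yet random relays spread the work almost evenly.

random.seed(1)
n, packets = 16, 16000
relay_load = [0] * n
for _ in range(packets):
    relay = random.randrange(n)  # VLB: choose the intermediate node uniformly
    relay_load[relay] += 1       # packet takes the two-hop path 0 -> relay -> 1

avg = packets / n
print(max(relay_load) / avg)  # close to 1: maximum relay load ≈ average load
```

With direct routing the most-loaded node handles all 16000 packets; under VLB the maximum relay load concentrates near the average, which is the "balance load with high probability" behavior the abstract refers to.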

Nov 13 (Wed)  Michael Kim  
Nov 20 (Wed)  Sam Hopkins  
Nov 27 (Wed)  No seminar (Thanksgiving)  
Dec 4 (Wed)  PhD students  Lightning talks by Brown PhD students 