The Brown Theory Seminar is a weekly seminar organized by the theory group in the Department of Computer Science at Brown University. The seminar invites external speakers to present their research on a wide range of topics in theoretical computer science.
Time and Location: All talks are held on Wednesdays noon-1pm in Lubrano (CIT 477) unless otherwise noted.
Stay Updated:
Subscribe to the theory mailing list to receive announcements about upcoming talks.
Here is a Google Calendar link with the seminar schedule.
Recordings of past talks are available on our YouTube channel.
For Speakers:
If you are interested in giving a talk, please contact Richard Huang and Phuoc Pham Van Long (organizers for Spring 2026).
Relevant information for invited speakers.
| Date | Speaker | Title | Abstract | Recording |
|---|---|---|---|---|
| Feb 11 (Wed) | Zhuan Khye Koh | A Strongly Polynomial Algorithm for Linear Programs with At Most Two Nonzero Entries per Row or Column | We give a strongly polynomial algorithm for minimum cost generalized flow, and as a consequence, for all linear programs with at most two nonzero entries per row or column. Previously, strongly polynomial algorithms were only known for the primal and dual feasibility problems. Our approach is to show that the path-following interior point method of Allamigeon et al. '22 terminates in a strongly polynomial number of iterations for minimum cost generalized flow. We achieve this by bounding the straight line complexity of the central path, which is the minimum number of pieces required by a piecewise affine curve to multiplicatively approximate the central path. Based on joint work with Daniel Dadush, Bento Natura, Neil Olver and László Végh. | Talk |
| Feb 18 (Wed) CIT 241 | Aaron (Louie) Putterman | Breaking the \(\sqrt{n}\) Barrier: New Parallel Algorithms for Finding a Matroid Basis | Over 40 years ago, Karp, Upfal, and Wigderson initiated the study of a fundamental question in parallel computation: how many adaptive rounds are required to find a basis of a matroid using only polynomially many independence queries (that is, queries that test whether a set is independent)? Their pioneering work established an upper bound of \(O(\sqrt{n})\) rounds and a lower bound of roughly \(n^{1/3}\) rounds; these bounds have remained unchanged since. In this talk, I will present recent progress on this question. We give a new parallel algorithm that, with high probability, finds a matroid basis in \(\tilde{O}(n^{3/7})\) rounds, improving upon the classical \(O(\sqrt{n})\) bound. For the important special case of partition matroids, we obtain an optimal \(\tilde{O}(n^{1/3})\)-round algorithm, essentially settling the round complexity in this setting. Our approach introduces a new matroid decomposition technique that may be of independent interest and also yields faster parallel algorithms for the classic matroid intersection problem. This is joint work with Sanjeev Khanna and Junkai Song (NYU). | Talk |
| Feb 25 (Wed) | Gregory Kehne | Coverage in Approval-Based Committee Voting | What does it mean for a set of alternatives to collectively represent the preferences of a population? In multiwinner elections and participatory budgeting, a common standard is the justified representation (JR) axiom family, which formalizes the idea that a winning set of \(k\) candidates is representative only if any \(1/k\)-fraction of voters that form a unified bloc has a winning candidate. My aim is to provide a theoretical introduction to the area, draw connections to coverage and cardinality-constrained submodular optimization, and present recent algorithmic results in models motivated by representative committee selection at scale. This is based on joint work with Ulrike Schmidt-Kraepelin and Krzysztof Sornat, and with Zhiyi Huang and Chutong Yang. | |
| Mar 4 (Wed) | Alexander Poremba | The Boundary Between Classical and Quantum Constraint Satisfaction Problems | Constraint satisfaction problems (CSPs) are a central object of study in theoretical computer science. In quantum Hamiltonian complexity, classical constraints are replaced by local quantum Hamiltonian terms, and questions about satisfiability become questions about the ground energy of a quantum system. While general quantum Hamiltonians can exhibit highly nonclassical behavior, an important special case, commuting Hamiltonians, lies much closer to classical CSPs: their terms are simultaneously diagonalizable, and many classical techniques continue to apply. However, physical many-body Hamiltonians are rarely exactly commuting, which naturally motivates the study of "almost commuting" systems. Despite their relevance, the implications of approximate commutation are only poorly understood. In this talk, I will present some new work on almost commuting Hamiltonians, which lie on the boundary between classical and quantum constraint satisfaction. We show how to efficiently approximate any almost commuting \(2\)-local qubit Hamiltonian by a commuting one. Specifically, we give a locality-preserving "rounding technique" that maps any \(2\)-local Hamiltonian \(H=\sum_{i=1}^m h_i\) with \(|[h_i,h_j]| \leq \epsilon\) to a nearby Hamiltonian \(\hat{H}\) whose terms pairwise commute, and which satisfies \(|H-\hat{H}| = O(m\epsilon^{1/6})\). As a consequence, we prove that \(\delta\)-approximations to the ground energy of \(\epsilon\)-almost commuting \(2\)-local qubit Hamiltonians lie in \(\mathsf{NP}\) whenever \(\delta \gg m\epsilon^{1/6}\), extending classical tractability well beyond the exactly commuting setting. Finally, we present two applications of our rounding framework: quantum Gibbs sampling and fast Hamiltonian simulation for almost commuting systems. Joint work with Islam Faisal (BU) and Anand Natarajan (MIT). | Talk |
| Mar 11 (Wed) | Daniel Alabi | The Existence of Error-Correcting Codes Implies Privacy Lower Bounds | We will discuss classic and new results that use error-correcting codes to prove lower bounds on the privacy-utility tradeoffs of differentially private mechanisms. The existence of such codes has been used to show the hardness of privately estimating simple counting queries. Typically, the lower bounds are proven via packing-style arguments and the use of fingerprinting codes. New results extend such arguments, from the simple counting query settings over a finite domain, to situations where the data might be drawn from an infinite (but structured) domain. | Talk |
| Mar 18 (Wed) | Alma Ghafari | Lower Bounds for Non-adaptive Local Computation Algorithms | We study non-adaptive local computation algorithms (LCAs). A classical reduction of Parnas and Ron converts any distributed algorithm into a non-adaptive LCA. Applied to known distributed algorithms, this yields non-adaptive LCAs for constant approximations of Maximum Matching (MM) and Minimum Vertex Cover (MVC) with query complexity \(\Delta^{O(\log \Delta / \log \log \Delta)}\), where \(\Delta\) is the maximum degree. With adaptivity, this bound improves dramatically to \(\mathrm{poly}(\Delta)\), but is such a gap necessary, or are there better non-adaptive LCAs? We show that the gap is indeed necessary. Our main result is a matching lower bound: any non-adaptive LCA computing a constant approximation of MM or MVC requires \(\Delta^{\Omega(\log \Delta / \log \log \Delta)}\) queries. This is the first separation between adaptive and non-adaptive LCAs, and shows that the Parnas-Ron reduction is essentially optimal for MM and MVC in the non-adaptive setting. Our proof blends techniques from two distinct lines of work: sublinear-time algorithm lower bounds and distributed computing lower bounds. We build on the coupling-over-acyclic-subgraphs technique of Behnezhad, Roghani, and Rubinstein, and apply it to a modified version of the construction of Kuhn, Moscibroda, and Wattenhofer (KMW). A key technical contribution is showing that this modified KMW instance has an important property: random walks of any length identify a matching edge with probability at most \(\Delta^{-\Omega(\log \Delta / \log \log \Delta)}\), a substantial strengthening of the original KMW result, which only handled walks of depth \(O(\log \Delta / \log \log \Delta)\). | Talk |
| Mar 25 (Wed) | No seminar (Spring Break) | | | |
| Apr 2 (Thu) CIT 316, 3-4pm | Oliver Richardson | The Inconsistency Quantification Problem | Probabilistic Dependency Graphs (PDGs) are a powerful modeling framework that has two closely related semantics that play two seemingly different roles. On one hand, PDGs specify a joint distribution; in this capacity, they are an especially modular and interpretable generalization of traditional probabilistic graphical models. On the other hand, PDGs can be inconsistent, and have a natural inconsistency measure; this allows them not only to capture many situations in machine learning, but also to provide a principled justification of many loss functions. We will start with a whirlwind tour of the modeling power of PDGs and motivate the information-theoretic measure of inconsistency that underlies their semantics. We will then see a sketch of how inference can be done efficiently by reduction to convex programming, and situate the inconsistency approximation problem among others in the literature. Surprisingly (at least to me), there is an almost-linear reduction from approximate PDG inference to inconsistency approximation, meaning that precisely quantifying one's inconsistencies is no easier than optimally resolving them. This delivers a nice unification of the two roles that PDGs play, at a deep algorithmic level. We will conclude with some related open problems. | Talk |
| Apr 8 (Wed) | Miranda Christ | Pseudorandom Error-Correcting Codes with Applications to Watermarking Generative AI | Motivated by the growing need to identify AI-generated content, we ([CG24]) introduced a powerful new framework for generative AI watermarking. This framework leverages a new cryptographic primitive called a pseudorandom error-correcting code (PRC). A PRC is an error-correcting code with the property that any polynomial number of codewords are pseudorandom to any efficient adversary. Since the introduction of PRCs, there has been a flurry of exciting works strengthening their properties and implementing them in practice. I will highlight two newer works with my collaborators ([AAC+25, CGG+26]) in which we define and construct improved PRCs motivated by applications. In the first, we construct a notion of an ideal PRC, where robustness holds even against an adversary that can query encoding and decoding oracles. In the second, we present improved constructions from an assumption used previously to construct doubly-efficient PIR. This construction is also the first with plausible subexponential security. This is based on works with Sam Gunn, Omar Alrabiah, Prabhanjan Ananth, Yevgeniy Dodis, Noah Golowich, Ankur Moitra, and Daniel Wichs: CG24, AAC+25, CGG+26. | Talk |
| Apr 15 (Wed) | Ning Luo | Towards Practical Zero-Knowledge Proofs for PSPACE | Zero-knowledge proofs (ZKPs) allow one party to prove a statement without revealing anything beyond its truth. While efficient ZKP systems exist for NP problems such as SAT, extending them to PSPACE has so far remained theoretical. In this talk, I will present our recent work on practical ZKPs for PSPACE-complete problems via proof systems for quantified Boolean formulas (QBFs). Our first protocol validates quantified-resolution (Q-RES) proofs in zero knowledge, and our second protocol proves knowledge of winning strategies (Skolem/Herbrand functions) by reducing them to unsatisfiability or validity proofs in ZKP. On the QBFEVAL benchmark suite, these protocols verify ~79% of instances within 100 seconds, whenever Q-RES proofs or winning strategies are available. I will also discuss applications of ZKPs for PSPACE to partial equivalence checking in hardware, conformant planning in AI, and black-box checking in software analysis, together with our evaluation results on these domains. This work has been accepted to IEEE S&P 2026. | |
| Apr 22 (Wed) | Matteo Riondato | HomeRun: Performing Curveball Trades in Streaming for Fast Null Modeling of Graphs, Hypergraphs, and Binary Matrices | State-of-the-art algorithms for randomizing (or "null-modeling") graphs, hypergraphs, binary matrices, and market basket datasets are based on the Curveball trade, a procedure that takes two n-dimensional binary vectors (e.g., 1-hop neighborhood vectors, hyperedges, columns, transactions), and transforms them into two new vectors with the same L1 norm, bitwise-XOR, and bitwise-AND as the original ones. Existing approaches make multiple passes over the input vectors, and require other operations with O(n) computational cost. We present HomeRun, an algorithm to perform a Curveball trade in two passes over the data. Joint work with Michelle Contreras-Catalan, appearing at WWW '26. | |
| Apr 29 (Wed) CIT 241 | Zachary Espiritu | Leafblower: A Leakage Attack Against TEE-Based Encrypted Databases | Trusted execution environments (TEEs) have emerged as a common solution for database systems to provide encryption in use. Several encrypted databases (EDBs) have been deployed within TEEs using library operating system toolchains that transparently allow existing applications to run within TEEs without modification. This "lift-and-shift" paradigm greatly simplifies the design of EDBs but leaves open questions about the security of the resulting system. In this talk, we propose a new leakage attack against TEE-based EDBs which use B+-trees in the multi-snapshot external memory model, a weaker adversary which only observes snapshots of the encrypted database index files after each operation. We show how to approximately order insertions by their inserted value by exploiting the "structural leakage" of the on-disk index format. Then, we leverage auxiliary information to recover the approximate plaintext values of insert operations with significant advantage over a naive adversary that makes guesses based on equivalent auxiliary information. Under optimal conditions (when the auxiliary information is accurate and the domain is small), we achieve up to 96% exact recovery in experiments on real-world datasets, which increases to 100% when scoped to later operations in the transcript. Our attack requires no injections and no information about read operations. While our work is primarily motivated by TEE-based encrypted databases, we demonstrate that our attack generalizes to other kinds of page-level encryption systems, including encrypted storage engines and disaggregated database systems. | |
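The Curveball trade described in the HomeRun abstract (Apr 22) has a simple operational reading: positions set in both vectors stay fixed, while positions set in exactly one vector are randomly reassigned between the two, with each vector keeping its original count. Below is a minimal Python sketch of a single trade under that reading; the function and variable names are illustrative, and this naive version makes the multiple O(n) passes that HomeRun is designed to avoid.

```python
import random

def curveball_trade(a, b, rng=random):
    """One Curveball trade on two equal-length binary vectors.

    Positions set in both vectors (the AND) are left untouched;
    positions set in exactly one vector (the XOR) are shuffled and
    reassigned, with each vector keeping its original count of them.
    """
    assert len(a) == len(b)
    only_a = [i for i in range(len(a)) if a[i] == 1 and b[i] == 0]
    only_b = [i for i in range(len(a)) if a[i] == 0 and b[i] == 1]
    pool = only_a + only_b
    rng.shuffle(pool)
    new_a, new_b = list(a), list(b)
    for i in pool[:len(only_a)]:   # these positions now belong to a
        new_a[i], new_b[i] = 1, 0
    for i in pool[len(only_a):]:   # these positions now belong to b
        new_a[i], new_b[i] = 0, 1
    return new_a, new_b
```

Because the shared positions never move and the symmetric-difference positions merely swap owners, the trade preserves both L1 norms, the bitwise AND, and the bitwise XOR of the pair, which is exactly the invariant the abstract states.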