Brown (Computer Science) Theory Seminar

Brown University

In Fall 2023 the seminar was organized by Yu Cheng, Ellis Hershkowitz and Peihan Miao.

All talks were held from noon to 1PM on Wednesdays in CIT 477 (Lubrano) unless otherwise stated.

Fall 2023 Schedule

Date Speaker Title Abstract Recording
October 11 (Wed) Ainesh Bakshi Learning Quantum Hamiltonians at Any Temperature in Polynomial Time
We study the problem of learning a local quantum Hamiltonian H given copies of its Gibbs state ρ = exp(−βH) / tr(exp(−βH)) at a known inverse temperature β > 0. Anshu, Arunachalam, Kuwahara, and Soleimanifar [AAKS20] gave an algorithm to learn a Hamiltonian on n qubits to precision ε with only polynomially many copies of the Gibbs state, but it takes exponential time. Obtaining a computationally efficient algorithm has been a major open problem [Alh22; AA23], with prior work resolving it only in limited special cases (high temperature [HKT22] or commuting terms [AAKS21]).

We fully resolve this problem, giving a polynomial time algorithm for learning H to precision ε from polynomially many copies of the Gibbs state at any constant β > 0. Our main technical contribution is a new flat polynomial approximation to the exponential function, and a translation between multi-variate scalar polynomials and nested commutators. This enables us to formulate Hamiltonian learning as a polynomial system. We then show that solving a low-degree sum-of-squares relaxation of this polynomial system suffices to accurately learn the Hamiltonian.

No prior knowledge of quantum information is assumed. Based on joint work with Allen Liu, Ankur Moitra, and Ewin Tang.
October 18 (Wed) Jiahui Liu Another Round of Breaking and Making Quantum Money: How to Not Build It from Lattices, and More
Publicly verifiable quantum money has been one of the central objects in quantum cryptography ever since Wiesner's pioneering idea of using quantum mechanics to construct banknotes that resist counterfeiting. So far, we do not know any publicly verifiable quantum money scheme that is provably secure under standard assumptions.

In this talk, we provide both negative and positive results for publicly verifiable quantum money. In the first part, we give a general theorem showing that a certain natural class of quantum money schemes from lattices cannot be secure. We use this theorem to break the recent quantum money scheme of Khesin, Lu, and Shor. In the second part, we propose a framework for building quantum money and quantum lightning, which we call invariant money, that abstracts some of the ideas behind quantum money from knots by Farhi et al. (ITCS’12). The security of this framework also implies security for a strengthening of quantum money in which not even the bank can duplicate banknotes.

Joint work with Hart Montgomery and Mark Zhandry.
October 25 (Wed) Jason Hartline Game Theory of Robust Algorithm Design
Robust algorithm design can be viewed as a zero-sum game between an algorithm designer and nature (a.k.a. the adversary). In zero-sum games, every mixed-strategy equilibrium – corresponding to a randomized algorithm and a randomization over inputs – has the same value, and this value is guaranteed against all strategies of the opponent, even ones not played in equilibrium. Solving for the equilibrium strategy is sometimes analytically tractable; when it is not, it may still be computationally tractable. This talk gives three applications of zero-sum game theory to better understand robust algorithm design.

First, we consider the classical online problem of ski rental. We obtain a closed-form solution for the algorithm with the optimal competitive ratio for any fixed time horizon. Moreover, we show that this algorithm takes advantage of mistakes by the adversary and is optimal in every subgame. Subgame optimality is a stronger property than worst-case optimality: it requires the algorithm to do as well as possible even outside the worst case, going beyond the conventional worst-case-only approach.

Second, we consider the prior-independent ski-rental problem, where the weather is drawn i.i.d. from an unknown distribution. We give a polynomial-time approximation scheme for identifying an online algorithm with near-optimal competitive ratio. This approximation scheme applies when (a) there is a small cover of the adversary’s (infinite) strategy space (here: a distribution over distributions), (b) the optimal algorithm for any adversary strategy is tractable, e.g., by dynamic programming, and (c) its performance can be computed. Applied to the ski-rental problem, the resulting algorithm improves significantly on the competitive ratio of worst-case algorithms.

This approximation-scheme framework applies generally when missing information prevents good algorithms. Third, we apply the framework to information aggregation from multiple agents when the correlation structure is unknown. For this problem, the canonical robustness paradigms of ratio, regret, and absolute give significantly different results, and we argue that regret is the better framework here.

Joint work with Yongkang Guo, Zhihuan Huang, Aleck Johnsen, Yuqing Kong, Anant Shah, and Fang-Yi Yu.
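As background for the ski-rental discussion above, the classical deterministic break-even strategy can be sketched as follows. This is a toy illustration with hypothetical function names, not the subgame-optimal or prior-independent algorithms from the talk:

```python
def breakeven_cost(season_days: int, buy: int, rent: int = 1) -> int:
    """Cost of the deterministic break-even strategy: rent for the first
    (buy // rent - 1) days, then buy at the start of the next ski day."""
    rent_days = buy // rent - 1
    if season_days <= rent_days:
        return season_days * rent
    return rent_days * rent + buy

def opt_cost(season_days: int, buy: int, rent: int = 1) -> int:
    """Offline optimum: rent every day or buy up front, whichever is cheaper."""
    return min(season_days * rent, buy)

# With buy = 10 and rent = 1, the worst case pays 9 + 10 = 19 while the
# optimum pays 10, so the strategy is (2 - rent/buy)-competitive.
```

The randomized and subgame-optimal algorithms described in the talk improve on this factor-2 guarantee.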
October 26 (Thu)
CIT 368 (12-1)
Zihan Tan On \((1+\epsilon)\)-Approximate Flow Sparsifiers
Given a large graph G with a subset T of k of its vertices called terminals, a quality-q flow sparsifier is a small graph H that contains T and preserves, up to a factor of q, all multicommodity flows that can be routed between the terminals in T. Constructing flow sparsifiers of good (small) quality and small size has been a central problem in graph compression for decades.

A natural approach to constructing \(O(1)\)-quality flow sparsifiers, adopted in most previous constructions, is contraction. Andoni, Gupta, and Krauthgamer constructed a sketch of size \(f(k,\epsilon)\) that stores all feasible multicommodity flows up to factor \((1+\epsilon)\), raised the question of constructing quality-\((1+\epsilon)\) flow sparsifiers whose size depends only on \(k\) and \(\epsilon\) (but not on the number of vertices of the input graph G), and proposed a contraction-based framework towards it using their sketch. In this work, we settle their question for contraction-based flow sparsifiers, by showing that quality-\((1+\epsilon)\) contraction-based flow sparsifiers of size \(f(\epsilon)\) exist for all 5-terminal graphs, but not for all 6-terminal graphs. Our hardness result on 6-terminal graphs strengthens, for contraction-based constructions, a recent hardness result of Krauthgamer and Mosenzon on exact (quality-1) flow sparsifiers. Our construction and proof utilize the notion of a tight span from metric geometry.

This talk is based on joint work with Yu Chen.
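To make the contraction framework concrete, here is a minimal sketch of a single edge contraction, the basic operation that contraction-based sparsifiers apply to non-terminal structure. The representation (a dict from vertex pairs to capacities) and the function name are hypothetical, chosen for illustration only:

```python
def contract_edge(edges, u, v):
    """Contract edge (u, v): merge v into u, summing capacities of the
    resulting parallel edges and dropping self-loops.

    `edges` maps sorted vertex pairs (a, b) to edge capacities."""
    merged = {}
    for (a, b), cap in edges.items():
        a2 = u if a == v else a
        b2 = u if b == v else b
        if a2 == b2:
            continue  # the contracted edge itself becomes a self-loop
        key = (min(a2, b2), max(a2, b2))
        merged[key] = merged.get(key, 0) + cap
    return merged

# Contracting vertex 2 into vertex 1 in a triangle merges the two
# remaining edges into one of capacity 2 + 4 = 6.
triangle = {(1, 2): 1, (2, 3): 2, (1, 3): 4}
```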
November 1 (Wed)
CIT 368 (1-2)
Kevin Yeo Cryptographic Data Structures: Limitations and Possibilities
Cryptographic data structures address the setting where clients query data stored by a potentially untrusted third-party server. They have been studied in many different settings, including oblivious RAMs, which leak no access patterns, as well as encrypted search, which offers faster and/or more expressive queries at the potential cost of leakage.

In this talk, we will survey these important cryptographic data structures and compare and contrast their advantages and disadvantages. We will discuss the best known constructions and several recent works that prove limitations. Finally, we will explore possible ways to design better cryptographic data structures: exploring gaps in current knowledge, weakening security guarantees to circumvent lower bounds, or ignoring infrequent edge cases that turn out to be core assumptions in the lower bounds.
November 8 (Wed) Manon Revel A Mathematical Model for Representative Democracy
Is democracy legitimate? It may come as a surprise that mathematicians have contributed to answering this question for a very (very) long time. In this talk, we will explore social choice theory (the mathematical field that studies democratic governance), discuss some of its most striking results, and review recent developments in understanding how we can make democracy more legitimate. We will look into the probability theory and random-graph theory underpinning these works, and assess their relevance to the current crises of democracies.
November 15 (Wed) Zheng Dai Fundamental Limits on the Robustness of Image Classifiers
Image classifiers tend to induce brittle classes, in the sense that given an image belonging to some class, one can usually find an image belonging to a different class nearby. Is this brittleness a result of how classifiers are constructed, or is there some fundamental property of image spaces that causes this to happen? In this talk we will show that there is a surprisingly low limit to the level of robustness that an image classifier can attain, regardless of how it is constructed. Specifically, we will show that given a space of n-by-n images, all but a tiny fraction of the images in any image class induced over that space can be moved outside that class by modifying only O(n) pixels. We will then generalize this result to arbitrary p-norms, and finally discuss how the bit depth of the image space affects these bounds.
November 22 (Wed)
No seminar (Thanksgiving Break)
November 29 (Wed) Václav Rozhoň Dijkstra on Steroids
We prove that Dijkstra’s shortest-path algorithm is universally optimal when combined with a sufficiently efficient heap data structure.

Universal optimality is a powerful beyond-worst-case performance guarantee for graph algorithms that informally states that a single algorithm performs as well as possible for every single graph topology. We give the first application of this notion to any sequential algorithm.

We design a new heap data structure with a working-set property guaranteeing that the heap takes advantage of locality in heap operations. Our heap matches the optimal (worst-case) bounds of Fibonacci heaps but also provides the beyond-worst-case guarantee that the cost of extracting the minimum element is merely logarithmic in the number of elements inserted after it instead of logarithmic in the number of all elements in the heap. This makes the extraction of recently added elements cheaper.

We prove that our working-set property is sufficient to guarantee universal optimality, specifically, for the problem of ordering vertices by their distance from the source vertex: The locality in the sequence of heap operations generated by any run of Dijkstra’s algorithm on a fixed topology is strong enough that one can couple the number of comparisons performed by any heap with our working-set property to the minimum number of comparisons required to solve the distance ordering problem on this topology.
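For reference, the heap operations that the above result analyzes are exactly those issued by textbook Dijkstra. Here is a minimal Python sketch with a standard binary heap (hypothetical function names; the talk replaces this heap with one satisfying the working-set property):

```python
import heapq

def dijkstra(adj, source):
    """Single-source shortest paths; `adj` maps a vertex to a list of
    (neighbor, weight) pairs. Returns distances to reachable vertices."""
    dist = {source: 0}
    pq = [(0, source)]  # heap of (tentative distance, vertex)
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale entry: u was already settled with a shorter path
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist
```

Note that the sequence of pushes and pops here is determined by the graph topology, which is what makes the coupling argument in the talk possible.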
December 6 (Wed) Peng Zhang Partitioning a Special Class of Cayley Graphs with Spectral Approximation
Given a graph \(G = (V, E)\), we want to partition the edge set of \(G\) into \(E_1\) and \(E_2\) such that the two edge-induced subgraphs \(G_1 = (V, E_1)\) and \(G_2 = (V, E_2)\) spectrally approximate \((1/2)G\) with a relative error \(O(\sqrt{\alpha})\), where \(\alpha\) is the maximum effective resistance between the two endpoints of any edge in \(G\). The proof of the Kadison-Singer conjecture by Marcus, Spielman, and Srivastava (Annals of Mathematics, 2015) implies that the above edge partition always exists, but no polynomial-time algorithms are known. We explore polynomial-time algorithms for partitioning a special class of Cayley graphs of \(Z_n\) whose generators form an arithmetic progression. Our algorithm relies on partitioning the generators of the Cayley graph. Furthermore, we show that if the generators are far from an arithmetic progression, then no partition of the generators yields two Cayley subgraphs with an error matching \(O(\sqrt{\alpha})\). This is joint work with Surya Teja Gavva (CUNY).
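As a small illustration of the objects in this abstract (hypothetical helper names, not the partitioning algorithm from the talk), one can build such a Cayley graph of \(Z_n\) and compute its spectrum directly: the adjacency matrix is circulant, so its eigenvalues are \(\lambda_j = \sum_{g} 2\cos(2\pi j g / n)\) over the generators \(g\):

```python
import math

def cayley_edges(n, gens):
    """Edge set of the Cayley graph of Z_n with symmetric connection
    set {+g, -g : g in gens}; edges are stored as frozensets."""
    edges = set()
    for v in range(n):
        for g in gens:
            edges.add(frozenset({v, (v + g) % n}))
    return edges

def spectrum(n, gens):
    """Eigenvalues of the circulant adjacency matrix (assuming no
    generator equals n/2): lambda_j = sum_g 2*cos(2*pi*j*g/n)."""
    return [sum(2 * math.cos(2 * math.pi * j * g / n) for g in gens)
            for j in range(n)]

# For n = 6 with the arithmetic-progression generators [1, 2], the graph
# is 4-regular, so lambda_0 = 4 and the eigenvalues sum to trace = 0.
```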
December 8 (Fri) Daniel Mitropolsky The Simplest Neural Models, and a Hypothesis for Language in the Brain
How do neurons, in their collective action, beget cognition, as well as intelligence and reasoning? As Richard Axel recently put it, we do not have a logic for the transformation of neural activity into thought and action; he identified discerning this logic as the most important future direction of neuroscience. I will present a mathematical neural model of brain computation called NEMO, whose key ingredients are spiking neurons, random synapses and weights, local inhibition, and Hebbian plasticity (no backpropagation). Concepts are represented by interconnected co-firing assemblies of neurons that emerge organically from the dynamical system of its equations. We show that it is possible to carry out complex operations on these concept representations, such as copying, merging, completion from small subsets, and sequence memorization. I will present how to use NEMO to implement an efficient parser of a small but non-trivial subset of English (leading to a surprising new characterization of context-free languages), and a more recent model of the language organ in the baby brain that learns the meaning of words, and basic syntax, from whole sentences with grounded input. We will also touch on lower bounds in the model, and the idea of a fine-grained complexity theory of the brain.