Exponential Quantum Communication Advantage in Distributed Inference and Learning
arXiv (2023)
Abstract
Training and inference with large machine learning models that far exceed the
memory capacity of individual devices necessitates the design of distributed
architectures, forcing one to contend with communication constraints. We
present a framework for distributed computation over a quantum network in which
data is encoded into specialized quantum states. We prove that for models
within this framework, inference and training using gradient descent can be
performed with exponentially less communication than their classical
analogs, and with only modest overhead relative to standard
gradient-based methods. We show that certain graph neural networks are
particularly amenable to implementation within this framework, and moreover
present empirical evidence that they perform well on standard benchmarks. To
our knowledge, this is the first example of exponential quantum advantage for a
generic class of machine learning problems that holds regardless of the data
encoding cost. Moreover, we show that models in this class can encode highly
nonlinear features of their inputs, and their expressivity increases
exponentially with model depth. We also delineate the space of models for which
exponential communication advantages hold by showing that they cannot hold for
linear classification. Our results can be combined with the natural privacy
advantages of the communicated quantum states, which limit the amount of
information that can be extracted from them about the data and model
parameters. Taken as a whole, these findings form a promising foundation for
distributed machine learning over quantum networks.
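
The claimed exponential gap in communication rests on a standard property of quantum data encodings: an N-dimensional feature vector can be stored in the amplitudes of a state on only ceil(log2 N) qubits, so transmitting the encoded state costs O(log N) qubits rather than O(N) real numbers. The sketch below is a minimal illustration of that counting argument only, not the paper's actual protocol; the function name `amplitude_encode` and the cost comparison are illustrative assumptions.

```python
import numpy as np

def amplitude_encode(x: np.ndarray) -> np.ndarray:
    """Encode a real feature vector into the amplitudes of a quantum state.

    An N-dimensional vector fits into ceil(log2(N)) qubits, so sending the
    encoded state costs O(log N) qubits versus O(N) numbers classically.
    Returns the unit-norm amplitude vector (a classical stand-in for |psi(x)>).
    """
    n_qubits = int(np.ceil(np.log2(len(x))))
    dim = 2 ** n_qubits
    padded = np.zeros(dim)
    padded[: len(x)] = x          # zero-pad to the next power of two
    norm = np.linalg.norm(padded)
    if norm == 0:
        raise ValueError("cannot encode the zero vector")
    return padded / norm

# Communication cost for one feature vector of dimension N:
N = 1_000_000
classical_cost = N                        # N real numbers sent over the wire
quantum_cost = int(np.ceil(np.log2(N)))   # ~20 qubits for the encoded state
print(f"classical: {classical_cost} numbers, quantum: {quantum_cost} qubits")
```

Note that this counting alone does not yield an advantage for arbitrary computations, since only limited information can be read out of such a state per copy; the paper's contribution is identifying a class of models (including certain graph neural networks) whose distributed inference and gradient-based training remain feasible under this restriction.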