Efficient parallelization of tensor network contractions for simulating quantum computation
Cite this dataset
Huang, Cupjin et al. (2021). Efficient parallelization of tensor network contractions for simulating quantum computation [Dataset]. Dryad. https://doi.org/10.5061/dryad.nk98sf7t8
Abstract
In this paper, we demonstrate a classical simulation framework for quantum computation by contracting tensor networks of sizes previously deemed out of reach. The main contribution of this work is a parallelization scheme called index slicing that breaks down an infeasibly large tensor network contraction task into smaller subtasks that can be executed fully in parallel, without interdependencies or intermediate communications. As a benchmarking example, we show that our algorithm can reduce the simulation of the Sycamore random circuit sampling task to less than 20 days, achieving an acceleration of over five orders of magnitude compared to the original proposal. We then showcase the capabilities of the simulation framework via investigations of near-term quantum algorithms and quantum error correction. Given the ubiquity of tensor networks in quantum information science, we believe that our simulation framework will be a valuable tool in the era of quantum information technology.
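To illustrate the idea behind index slicing described above, the following is a minimal sketch (not the authors' implementation) using NumPy: a shared index of a small three-tensor network is fixed to one value at a time, turning the contraction into independent subtasks that could each run on a separate worker with no intermediate communication, with a single summation at the end.

```python
import numpy as np

def contract_full(a, b, c):
    # Direct contraction of a small three-tensor chain:
    # result[i, l] = sum_{j, k} a[i, j] * b[j, k] * c[k, l]
    return np.einsum("ij,jk,kl->il", a, b, c)

def contract_sliced(a, b, c):
    # Index slicing: fix the shared index j to one value at a time.
    # Each fixed-j contraction is an independent subtask (embarrassingly
    # parallel); the partial results are summed only at the very end.
    dim_j = a.shape[1]
    partial_results = [
        np.einsum("i,k,kl->il", a[:, j], b[j, :], c) for j in range(dim_j)
    ]
    return sum(partial_results)
```

Slicing an index of dimension d multiplies the number of subtasks by d while shrinking each subtask's memory footprint, which is what makes otherwise infeasibly large contractions tractable on distributed hardware.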
Methods
This dataset contains the contraction schemes used in our classical simulation demonstrations, including random quantum circuit simulation and simulation of the quantum approximate optimization algorithm (QAOA).
Usage notes
Please see the README file.
Funding
United States Army Research Office, Award: W911NF-16-1-0349
National Science Foundation, Award: 1717523