2020
DOI: 10.1038/s41598-020-71892-0

Classical simulation of boson sampling with sparse output

Abstract: Boson sampling can simulate physical problems for which classical simulations are inefficient. However, not all problems simulated by boson sampling are classically intractable. We show explicit classical methods of finding boson sampling distributions when they are known to be highly sparse. In these methods, we first determine a few distributions from a restricted number of detectors and then recover the full one using compressive sensing techniques. In general, the latter step could be of high complexity. Howev…
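The recovery step outlined in the abstract can be illustrated with a standard sparse-recovery routine. The sketch below uses orthogonal matching pursuit as a generic stand-in for the compressive-sensing step (not necessarily the algorithm used in the paper); the measurement matrix `A`, the observation vector, and all sizes are hypothetical placeholders for the restricted-detector data.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: find a k-sparse x with y ≈ A @ x.

    Generic illustration of the compressive-sensing recovery step:
    A models the measurement map, y the few observed values.
    """
    residual = y.astype(float).copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        # "Support detection": locate the column most correlated with
        # the residual (the costly step discussed by the citing works).
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # Least-squares refit restricted to the detected support.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x[:] = 0.0
        x[support] = coef
        residual = y - A @ x
    return x

# Toy usage: recover a 3-sparse vector from 60 random linear measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((60, 200))
x_true = np.zeros(200)
x_true[[5, 60, 150]] = [1.0, -0.7, 0.4]
x_hat = omp(A, A @ x_true, k=3)
```

For a well-conditioned random measurement map and a sufficiently sparse target, this greedy scheme recovers the support exactly; the hard part in general, as the citing statements below note, is precisely that support-detection step.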


Cited by 8 publications (4 citation statements)
References 54 publications
“…Known classical simulation methods for boson sampling with sparse outputs as they have been presented by Oh et al (2022a) and Roga and Takeoka (2020) have challenged these results to an extent, in that they argue that the instances considered when sampling from Franck-Condon factors are often sparse in the appropriate sense. Technically, this work demonstrates that the computationally costly support detection step, i.e., the localization of the largest element from a long list, can be reduced to solving an Ising model that can be solved in polynomial time under suitable conditions.…”
Section: Exploiting Structure
confidence: 99%
“…Both of the just discussed lines of work are contributions that show the potential of achieving computational advantages in practically motivated problems by using Gaussian boson sampling devices. At the same time, as the classical algorithms by Oh et al (2022a,b) and Roga and Takeoka (2020) show, it is less obvious whether efficient classical algorithms can be found that make use of the structure imposed on the Gaussian boson sampler that is exploited to solve a specific computational problem. Certainly, in these cases there is no complexity-theoretic reason analogous to the polynomial hierarchy collapse to believe in a quantum speedup. Rather, now we are moving into the realm of comparing quantum algorithms with the best classical algorithm for specific problems, as one would also expect when considering practically relevant problems.…”
Section: Exploiting Structure
confidence: 99%
“…Since the original proposal several small-scale boson sampling experiments have been reported [4-15], with state-of-the-art ranging up to five photons with near-deterministic quantum dot sources [12-14]. From the opposite end, there has also been intense activity in the development of classical simulation algorithms [16-18], and current supercomputers are expected to simulate 50-photon experiments without much difficulty [17,19]. A more refined analysis of the complexity-theoretic arguments underpinning boson sampling has suggested 90 photons as a concrete milestone for the demonstration of quantum computational advantage [20].…”
Section: Introduction and Relation To Previous Work
confidence: 99%
“…For example, as detailed in Supplemental Material [99], we can also analyze the long-time complexity transition through the rank of W(t), which suggests that the time at which the problem becomes easy is smaller than t_max. We conjecture that the classically hard regime exists in the red region, where U(t) does not satisfy known conditions for computing and/or sampling the distribution to be easy [63,86,101]; indeed, U(t) in the red region is typically full-rank, not sparse, and its components have various signs. However, we leave the detailed analysis for the red region as a future problem.…”
confidence: 98%