2020
DOI: 10.1109/tit.2020.2989385

Nearly Optimal Sparse Polynomial Multiplication

Abstract: In the sparse polynomial multiplication problem, one is asked to multiply two sparse polynomials f and g in time that is proportional to the size of the input plus the size of the output. The polynomials are given via lists of their coefficients F and G, respectively. Cole and Hariharan (STOC 02) have given a nearly optimal algorithm when the coefficients are positive, and Arnold and Roche (ISSAC 15) devised an algorithm running in time proportional to the "structural sparsity" of the product, i.e. the set sup…
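To make the abstract's distinction concrete, here is a small illustrative sketch (ours, not from the paper; the {exponent: coefficient} dictionary representation is only for exposition). The structural sparsity counts every exponent in the sumset of the two supports, while the true output support can be strictly smaller when coefficients cancel.

```python
# Sparse polynomials as {exponent: coefficient} dicts (illustrative representation only).
f = {0: 1, 5: 2}          # f(x) = 1 + 2x^5
g = {0: 3, 5: -6, 7: 1}   # g(x) = 3 - 6x^5 + x^7

# Structural sparsity: size of the sumset supp(f) + supp(g).
structural_support = {ef + eg for ef in f for eg in g}

# Actual product support: exponents whose coefficients survive cancellation.
product = {}
for ef, cf in f.items():
    for eg, cg in g.items():
        product[ef + eg] = product.get(ef + eg, 0) + cf * cg
product = {e: c for e, c in product.items() if c != 0}

print(sorted(structural_support))  # [0, 5, 7, 10, 12]
print(sorted(product))             # [0, 7, 10, 12] -- the x^5 terms cancel
```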

Cited by 21 publications (13 citation statements)
References 25 publications
“…Here the task is to compute the convolution of two sparse vectors much faster than performing an FFT, ideally in near-linear time in the input plus output size (i.e., the number of non-zero entries of the input and output vectors). Running time near-linear in the input plus output size was achieved for vectors with non-negative entries by Cole and Hariharan [CH02] and for general vectors in [Nak20]; see also [GGdC20] for improvements by additional log m factors. Very recently, a Monte Carlo O(k log k)-time algorithm was achieved in [BFN21] for non-negative convolution, where k is the input plus output size.…”
Section: -Fold Case
confidence: 99%
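To make the quoted measure concrete (an illustration of ours, not taken from the cited works): a dense convolution always produces a vector of length 2n - 1, whereas the input plus output size k counts only non-zero entries and can be much smaller.

```python
import numpy as np

# Two sparse 0/1 vectors of length n with t non-zeros each (illustrative sizes).
n, t = 5000, 8
rng = np.random.default_rng(0)
f = np.zeros(n, dtype=np.int64); f[rng.choice(n, t, replace=False)] = 1
g = np.zeros(n, dtype=np.int64); g[rng.choice(n, t, replace=False)] = 1

# Dense convolution returns a vector of length 2n - 1 regardless of sparsity...
h = np.convolve(f, g)

# ...while the output-sensitive measure k (non-zeros of input plus output) stays small.
k = np.count_nonzero(f) + np.count_nonzero(g) + np.count_nonzero(h)
print(len(h), k)  # 9999 vs. at most t + t + t*t = 80
```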
“…Hsu and Shieh proposed a method with fewer additions and multiplications [19] in 2020. There are also studies on reducing the space complexity of polynomial multiplication [20,21]. However, the complexity of approaches based on accelerating the FFT remains at O(n log n).…”
Section: Related Work
confidence: 99%
“…Another difference from the dense case is that studying the complexity in a purely algebraic model remains meaningless, unless one assumes a transdichotomous model on the degree, meaning that the integer computations on the exponents always take O(1) time [3,25]. The classical approach for computing the product of two polynomials of sparsity T is to generate all the T^2 possible monomials, then sort them and merge those of equal degree to collect the monomials of the result.…”
Section: Dense Polynomial Multiplication
confidence: 99%
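The "classical approach" described in the last excerpt can be sketched in a few lines (our illustration, not code from any of the cited papers): generate all T^2 candidate monomials, sort them by exponent, and merge equal exponents. This is quadratic in the input sparsity, which is exactly the blow-up the output-sensitive algorithms discussed above avoid.

```python
from itertools import product as cartesian
from operator import itemgetter

def classical_sparse_mul(f, g):
    """Multiply sparse polynomials given as lists of (exponent, coefficient) pairs
    by generating all candidate monomials, sorting by exponent, and merging."""
    terms = [(ef + eg, cf * cg) for (ef, cf), (eg, cg) in cartesian(f, g)]
    terms.sort(key=itemgetter(0))
    result = []
    for e, c in terms:
        if result and result[-1][0] == e:
            result[-1] = (e, result[-1][1] + c)
        else:
            result.append((e, c))
    return [(e, c) for e, c in result if c != 0]

# (1 + 2x^5) * (3 - 6x^5 + x^7) = 3 + x^7 - 12x^10 + 2x^12
print(classical_sparse_mul([(0, 1), (5, 2)], [(0, 3), (5, -6), (7, 1)]))
```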