In the Determinant Maximization problem, given an $$n \times n$$ positive semi-definite matrix $${\textbf {A}}$$ in $$\mathbb {Q}^{n \times n}$$ and an integer $$k$$, we are required to find a $$k \times k$$ principal submatrix of $${\textbf {A}}$$ having the maximum determinant. This problem is known to be NP-hard and further proven to be W[1]-hard with respect to $$k$$ by Koutis (Inf Process Lett 100:8–13, 2006); i.e., an $$f(k)n^{{{\,\mathrm{\mathcal {O}}\,}}(1)}$$-time algorithm is unlikely to exist for any computable function $$f$$.
However, there is still room to explore its parameterized complexity in restricted cases, in the hope of overcoming the general-case parameterized intractability. In this study, we rule out the fixed-parameter tractability of Determinant Maximization even if an input matrix is extremely sparse or of low rank, or an approximate solution is acceptable. We first prove that Determinant Maximization is NP-hard and W[1]-hard even if an input matrix is an arrowhead matrix; i.e., the underlying graph formed by its nonzero entries is a star, implying that structural sparsity is not helpful. By contrast, Determinant Maximization is known to be solvable in polynomial time on tridiagonal matrices (Al-Thani and Lee, in: LAGOS, 2021). Thereafter, we demonstrate W[1]-hardness with respect to the rank $$r$$ of an input matrix. Our result is stronger than Koutis’ result in the sense that any $$k \times k$$ principal submatrix is singular whenever $$k > r$$. We finally give evidence that it is W[1]-hard to approximate Determinant Maximization parameterized by $$k$$ within a factor of $$2^{-c\sqrt{k}}$$ for some universal constant $$c > 0$$. Our hardness result is conditional on the Parameterized Inapproximability Hypothesis posed by Lokshtanov et al. (in: SODA, 2020), which asserts that a gap version of the Binary Constraint Satisfaction Problem is W[1]-hard. To complement this result, we develop an
$$\varepsilon $$-additive approximation algorithm that runs in $$\varepsilon ^{-r^2} \cdot r^{{{\,\mathrm{\mathcal {O}}\,}}(r^3)} \cdot n^{{{\,\mathrm{\mathcal {O}}\,}}(1)}$$ time, where $$r$$ is the rank of the input matrix, provided that the diagonal entries are bounded.
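For concreteness, the following is a minimal brute-force sketch of the problem as defined above: it enumerates all $$k \times k$$ principal submatrices and returns one with the largest determinant. The function name `max_det_principal_submatrix`, the use of NumPy, and the small arrowhead example are our own illustrative choices rather than anything from the paper; the exhaustive search performs $$\binom{n}{k}$$ determinant evaluations and is only meant to pin down the objective, not to suggest an efficient algorithm.

```python
# Illustrative brute-force sketch (not the paper's algorithm): enumerate every
# k x k principal submatrix of a PSD matrix A and keep one whose determinant
# is largest.  This takes C(n, k) * k^3 time, so it is practical only for tiny
# n; the paper studies the (in)tractability of doing better in the parameters
# k and r = rank(A).
from itertools import combinations

import numpy as np


def max_det_principal_submatrix(A: np.ndarray, k: int):
    """Return (best_index_set, best_determinant) over all k x k
    principal submatrices of the square matrix A."""
    n = A.shape[0]
    best_set, best_det = None, -np.inf
    for S in combinations(range(n), k):
        idx = list(S)
        det = np.linalg.det(A[np.ix_(idx, idx)])  # principal submatrix A[S, S]
        if det > best_det:
            best_set, best_det = S, det
    return best_set, best_det


if __name__ == "__main__":
    # Example on a small arrowhead matrix (nonzeros only on the diagonal and in
    # the first row and first column), the sparse structure for which the paper
    # still shows NP- and W[1]-hardness.
    A = np.array([[4.0, 1.0, 2.0, 1.0],
                  [1.0, 3.0, 0.0, 0.0],
                  [2.0, 0.0, 5.0, 0.0],
                  [1.0, 0.0, 0.0, 2.0]])
    print(max_det_principal_submatrix(A, 2))  # picks indices (0, 2), det = 16
```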