- cc.complexity-theory time-complexity na.numerical-analysis
- Updated Thu, 02 Jun 2022 06:24:11 GMT

There are various algorithms that output the eigenvalues of an $n \times n$ matrix in time $O(n^3)$. However, I can't find anything that tells me about the precision of the algorithms' output. That is, how does the time complexity grow with the number of bits of precision I require in the answer?

Furthermore, how does this depend on the number of bits used to specify the initial matrix? For example, what happens if I specify the matrix elements to precision exponential in $n$?

Any starting point for this would be helpful.

I do not have a complete answer to your question, but here are some thoughts and references. First of all, it is crucial to specify the model of computation and the representation of the input; I expect this post will clarify why. I know of essentially three cases:

- the input matrix has coefficients that are algebraic numbers (*): then the eigenvalues are also algebraic numbers and can be determined effectively, and one can also determine the multiplicities of the eigenvalues. There exist very good algorithms for computing rational approximations of algebraic numbers, such as [1], where the bit complexity is $O((b+n)n^2)$ (neglecting some log factors), $b$ being the number of bits and $n$ the degree of the polynomial that represents the algebraic number (which in turn depends polynomially on the height of the coefficients of the matrix). A short sketch of this case follows the list.
- the input matrix has coefficients that can be arbitrary real numbers, but we can query arbitrary rational approximations of them: this fits into the theory of Computable Analysis and things get complicated. It is known that the eigenvalues are uniformly computable from the input coefficients (because they are roots of the characteristic polynomial) [2], and the algorithm runs in polynomial time in the precision; using [1], it probably runs in linear time (if we ignore the cost of computing approximations of the input). However, the eigenvectors are not computable, nor are the multiplicities, unless you know the multiplicities in advance [3].
- you do not care about the representation of numbers and assume all arithmetic operations have unit cost: this is the BSS model, and [4] gives a very precise *arithmetic* complexity that is logarithmic in the precision (i.e. $O(\log b)$, where $b$ is the number of bits). But of course this completely neglects the complexity of representing and operating on the numbers, and it assumes that equality testing is a unit-cost operation (whereas it is uncomputable in general). I personally find this model to be very inadequate for expressing the complexity of this problem.
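To make the first case concrete, here is a minimal sketch in Python with sympy (the matrix and the target precision $b$ are illustrative choices of mine, and sympy's root refinement is a stand-in for, not an implementation of, the algorithm of [1]): the characteristic polynomial of a rational matrix is computed exactly, and only then are its roots approximated to the requested number of bits.

```python
# Sketch of the "algebraic coefficients" case: exact characteristic
# polynomial first, arbitrary-precision root approximation second.
# The matrix A and the precision b are arbitrary illustrative choices.
from sympy import Matrix, Rational, symbols

x = symbols("x")

A = Matrix([[Rational(1, 3), 2],
            [5, Rational(7, 2)]])

p = A.charpoly(x)               # exact polynomial over Q, no rounding so far

b = 256                         # requested precision in bits
digits = int(b * 0.30103) + 1   # sympy counts decimal digits, not bits

for root in p.all_roots():      # exact algebraic numbers (radicals/CRootOf)
    print(root.evalf(digits))   # numerical approximation to roughly b bits
```

The point of the split is that all rounding is confined to the last step, which is exactly where bounds of the $O((b+n)n^2)$ type from [1] apply.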

Based on your question, I suspect your matrix has rational entries (which are algebraic numbers of degree 1), so it would fall into the first case. In this case, I think (but have not checked) that the bit complexity will be linear (maybe with polylogarithmic factors) in the precision, but the constant will depend on the size of the matrix and on the numbers in it (most probably polynomially).

(*) Algebraic numbers can be represented exactly with a finite number of bits, and all arithmetic operations (and more) on algebraic numbers can be performed effectively; however, the complexity depends on the height of the numbers, and thus on the field extension in which the coefficients of the input lie.
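As a small illustration of (*), here is what such an exact representation looks like in sympy (the degree-5 polynomial is an arbitrary example of mine): an algebraic number is stored as a defining polynomial plus a root index, arithmetic on such numbers stays exact, and rational approximations of any accuracy can be extracted on demand.

```python
# Footnote (*) in code: exact representation of an algebraic number and
# rational approximation to any requested accuracy.
from sympy import CRootOf, Rational, minimal_polynomial, sqrt, symbols

x = symbols("x")

# Finitely many bits: a defining polynomial plus a root index.
alpha = CRootOf(x**5 - x - 1, 0)

# Arithmetic is effective: alpha + sqrt(2) is again algebraic, and its
# minimal polynomial (of larger degree and height) can be computed exactly.
print(minimal_polynomial(alpha + sqrt(2), x))

# A rational approximation with error below 2**-100, derived from the
# exact data alone.
print(alpha.eval_rational(Rational(1, 2**100)))
```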

[1] V. Pan, *Optimal and nearly optimal algorithms for approximating polynomial zeros*, 1996

[2] K. Ko, *Complexity Theory of Real Functions*, 1991

[3] V. Brattka and M. Ziegler, *A Computable Spectral Theorem*

[4] V. Pan and Z. Chen, *The Complexity of the Matrix Eigenproblem*, 1999

- +0 – Thanks for the detailed answer. Would you expect the complexity in precision to change if I were to specify the matrix had $O(1)$ integer coefficients (maybe even just 0 and 1)? — Nov 06, 2017 at 12:49
- +0 – An additional question: what is the cause of the exponential overhead between "normal" complexity and arithmetic complexity? Surely arithmetic operations only give a polynomial overhead? — Nov 06, 2017 at 12:57
- +0 – This really comes down to computing roots of integer polynomials, and [1] is already linear in $b$, the number of bits, which is optimal (probably you can improve the dependence on $n$ if the matrix has 0 and 1 coefficients). The arithmetic complexity ignores the size of the representation. Example: let $x_b=2^b$; then computing $x_b$ takes at least $\Omega(b)$ binary operations (you need to write at least $b$ bits) but only $O(\log b)$ arithmetic operations (fast exponentiation). Notice how each multiplication doubles the size of the number: this is the exponential overhead (a small demo follows these comments). — Nov 06, 2017 at 13:03
- +0 – Apologies, I didn't realise $O(b)$ was optimal. Thanks for the answer -- that makes perfect sense! — Nov 06, 2017 at 13:14
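To make the point from the comments tangible, here is a tiny self-contained demo (the function name and test values are mine): computing $2^b$ by fast exponentiation uses only $O(\log b)$ multiplications, yet the result occupies $b+1$ bits, so any bit-level accounting is already $\Omega(b)$.

```python
# Fast exponentiation: O(log b) multiplications, but a b+1-bit result,
# so writing the answer down alone costs Omega(b) bit operations.
def pow2(b: int) -> tuple[int, int]:
    """Return (2**b, number of multiplications used)."""
    result, base, muls = 1, 2, 0
    while b:
        if b & 1:
            result *= base   # multiply the accumulated result in
            muls += 1
        base *= base         # square the base; its bit size doubles
        muls += 1
        b >>= 1
    return result, muls

for b in (10, 1000, 100000):
    value, muls = pow2(b)
    print(f"b={b}: {muls} multiplications, {value.bit_length()} bits in result")
```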