- na.numerical-analysis sparse-matrix
- Updated Wed, 01 Jun 2022 16:15:53 GMT

I am currently trying to understand whether I can make some claims about the relation between eigenspaces of a sparsifier and the original matrix. In this context, let me first define a couple of entities.

**Definition 1**: Partial Ordering Among Matrices:
We say that two symmetric matrices $\mathbf A, \mathbf B$ satisfy the partial ordering $\mathbf A \preceq \mathbf B$ if the matrix $\mathbf B - \mathbf A$ is positive semidefinite, i.e. $\vec x^T (\mathbf B - \mathbf A) \vec x \geq 0$ for all $\vec x$.
With this definition, we say that $\mathbf B$ is an $\epsilon$-approximation of $\mathbf A$ if $(1- \epsilon) \mathbf B \preceq \mathbf A \preceq (1+\epsilon) \mathbf B$ (definitions vary slightly here).
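For concreteness, here is a minimal numpy sketch (the function names are my own) that tests the two Loewner inequalities numerically by checking that the smallest eigenvalue of each difference matrix is nonnegative:

```python
import numpy as np

def is_psd(M, tol=1e-10):
    # Symmetric M is positive semidefinite iff its smallest eigenvalue is >= 0
    # (up to a numerical tolerance).
    return np.linalg.eigvalsh(M).min() >= -tol

def is_eps_approximation(A, B, eps):
    # Check (1 - eps) * B <= A <= (1 + eps) * B in the Loewner order.
    return is_psd(A - (1 - eps) * B) and is_psd((1 + eps) * B - A)
```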

Spielman and Teng famously gave the first algorithm to spectrally sparsify a graph: given a graph $G=(V,E)$, the algorithm returns a graph $H$ such that $H$ is an $\epsilon$-spectral approximation of $G$, with average degree $\mathcal{O}(1/\epsilon^2)$ (which controls the sparsity). These spectral sparsifiers can in turn be used to solve, e.g., Laplacian systems in nearly linear time (nowadays I believe linear time was achieved by Ioannis Koutis et al., or others around M. B. Cohen).

Now the question I have is the following: usually, when solving linear systems, instead of inverting the matrix, some sort of polynomial is used to approximate the inverse. Hence the eigenspaces of the (pseudo-)inverse and of the polynomial are the same, and we can easily show that the eigenvalues of the polynomial and of the inverse are all $\epsilon$-close. How does this change in the general case, where $\mathbf B$ is a spectral sparsifier of $\mathbf A$? Can I say anything about the relationship between the eigenspaces?
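To illustrate the polynomial case, here is a self-contained numpy sketch with one concrete choice of polynomial (my own example, a truncated Neumann series): $p(\mathbf A) = \sum_{k=0}^{m} (\mathbf I - \mathbf A)^k$ is a polynomial in $\mathbf A$, so it shares $\mathbf A$'s eigenvectors exactly, while its eigenvalues $(1 - (1-\lambda)^{m+1})/\lambda$ approach $1/\lambda$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 5, 50

# Random symmetric A with eigenvalues in (0, 2), so the Neumann series converges.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
lam = rng.uniform(0.2, 1.8, size=n)
A = Q @ np.diag(lam) @ Q.T

# Truncated Neumann series: p(A) = sum_{k=0}^{m} (I - A)^k  ~  A^{-1}.
P = np.zeros_like(A)
T = np.eye(n)
for _ in range(m + 1):
    P += T
    T = T @ (np.eye(n) - A)

# p(A) has the same eigenvectors as A, and its eigenvalues are close to 1/lambda.
print(np.max(np.abs(np.linalg.eigvalsh(P) - np.sort(1 / lam))))  # tiny, ~1e-4 or less
```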

My idea was to use the eigenvalue decompositions of both matrices and project the basis of $\mathbf B$ onto the basis of $\mathbf A$. However, this doesn't seem to give me a more quantitative understanding. Are there any ways to bound the differences between the eigenspaces? It seems that in the worst case the plus and minus terms could cancel each other out, allowing for quite large ranges of distances (from the projections) between the actual eigenvectors of $\mathbf A$ and $\mathbf B$.

I am not sure how you measure distance between eigenspaces, but I do not think you can say anything non-trivial here without some assumption that the eigenvalues of $A$ are separated. For example, let $u$ and $v$ be two arbitrary unit vectors, and define $A = I + \varepsilon uu^\intercal$ and $B = I + \varepsilon vv^\intercal$. Then $(1-\varepsilon)B \preceq A \preceq (1+\varepsilon)B$. However, the leading eigenvector of $A$ is $u$ and the leading eigenvector of $B$ is $v$, and, since they were arbitrary, they can be as far apart as you want.
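A quick numerical check of this construction (my own sketch, not part of the original answer) confirms both the Loewner sandwich and the eigenvector divergence:

```python
import numpy as np

rng = np.random.default_rng(1)
n, eps = 6, 0.1

# Two arbitrary unit vectors u and v.
u = rng.standard_normal(n); u /= np.linalg.norm(u)
v = rng.standard_normal(n); v /= np.linalg.norm(v)

A = np.eye(n) + eps * np.outer(u, u)
B = np.eye(n) + eps * np.outer(v, v)

# (1 - eps) B <= A <= (1 + eps) B holds: both differences are PSD ...
print(np.linalg.eigvalsh(A - (1 - eps) * B).min() >= -1e-12)  # True
print(np.linalg.eigvalsh((1 + eps) * B - A).min() >= -1e-12)  # True

# ... yet the leading eigenvectors are (up to sign) u and v, which are arbitrary.
top_A = np.linalg.eigh(A)[1][:, -1]
top_B = np.linalg.eigh(B)[1][:, -1]
print(abs(top_A @ top_B))  # ~ |u . v|: small whenever u and v are nearly orthogonal
```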

- +0 – Yes, that makes sense. But if we look at this concrete example, it also seems that, since $I$ already provides a complete basis, we could just add an $\epsilon$ fraction to all the basis vectors, i.e. distribute $uu^T$ over them, and all the basis vectors would then still be close, in that case even less than $\epsilon$-close. Or is that incorrect? — Dec 04, 2017 at 15:41
- +0 – I am not sure if I am getting your comment correctly. You could pick two arbitrary orthonormal bases $u_1, \ldots, u_n$ and $v_1, \ldots, v_n$, and distinct numbers $0 < \varepsilon_n <\ldots < \varepsilon_1 \le 1$, and define $A = I + \sum_{i = 1}^n{\varepsilon_i u_i u_i^T}$ and $B = I + \sum_{i = 1}^n{\varepsilon_i v_i v_i^T}$. $A$ and $B$ are still $(1+\varepsilon)$-close, but the unique eigenbasis of $A$ is $u_1, \ldots, u_n$, while the unique eigenbasis of $B$ is $v_1, \ldots, v_n$. Again, the two eigenbases are arbitrary and you can make them as far apart as you want, in whatever measure (see the sketch after these comments). — Dec 04, 2017 at 16:39
- +0 – Because of such examples, results that talk about stability of eigenvectors always need to assume that the eigenvalues are somehow separated. — Dec 04, 2017 at 16:43
- +0 – The ordering also implies that solving linear systems in the one operator is similar to solving them in the other. Applying the pseudoinverse amounts to applying the reciprocals of the eigenvalues times the outer products of the eigenvectors. Hence is it correct to assume that even though the eigenspaces might be different, they ultimately will have a very close decomposition? — Dec 08, 2017 at 10:58
- +0 – I don't know what this means: "even though the eigenspaces might be different, they ultimately will have a very close decomposition". — Dec 09, 2017 at 06:26
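To make the full-eigenbasis construction from the comments concrete, here is a small numpy sketch (my own illustration, not from the thread): $A$ and $B$ share the spectrum $\{1 + \varepsilon_i\}$ and satisfy the Loewner sandwich with $\varepsilon = \varepsilon_1$, yet their unique eigenbases are the arbitrary orthonormal bases $U$ and $V$:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
eps = np.sort(rng.uniform(0.01, 1.0, size=n))[::-1]  # distinct eps_1 > ... > eps_n

# Two arbitrary orthonormal bases U and V.
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))

A = np.eye(n) + U @ np.diag(eps) @ U.T  # I + sum_i eps_i u_i u_i^T
B = np.eye(n) + V @ np.diag(eps) @ V.T

# (1 - eps_1) B <= A <= (1 + eps_1) B in the Loewner order ...
e1 = eps[0]
print(np.linalg.eigvalsh(A - (1 - e1) * B).min() >= -1e-12)  # True
print(np.linalg.eigvalsh((1 + e1) * B - A).min() >= -1e-12)  # True

# ... while the eigenbases differ: the overlap matrix |U^T V| is generally
# far from a permutation matrix, i.e. no eigenvector of A is close to one of B.
print(np.round(np.abs(U.T @ V), 2))
```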