seeking to simplify find-diagonalizer-in-e-basis
#841
Comments
I was wrong about this. The real QZ decomposition …

Note the significant divergence between the factorizations, despite the small difference between … You can see how deceptive the above can be. For some reason I was under the impression that, because … The Python code I had (by the way, qz was not even my idea; I found it in a Stack Exchange post somewhere) was half-baked and not thoroughly tested. It did work on the examples that I tried, but that's not enough in this case.

The broader approach of this orthogonal factorization is that we are trying to take advantage of the commutation of the real and imaginary parts of a unitary matrix U by simultaneously diagonalizing them. According to random strangers this is a hard problem, but there may be some algorithms (e.g. the mentioned one using "Jacobi angles") which can tackle it. On the other hand, there seem to be some strings attached, and these algorithms are not standard LAPACK routines.
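For anyone following along, the generalized Schur (QZ) factorization being discussed is available as scipy.linalg.qz. A minimal sketch (random matrices of my own, not from the code above) of what it computes:

```python
import numpy as np
from scipy.linalg import qz

# Random test matrices (my own, purely illustrative)
rng = np.random.default_rng(0)
a = rng.standard_normal((4, 4))
b = rng.standard_normal((4, 4))

# QZ (generalized Schur): A = Q S Z^H and B = Q T Z^H,
# with Q, Z unitary and S, T triangular.
s, t, q, z = qz(a, b, output="complex")
assert np.allclose(q @ s @ z.conj().T, a)
assert np.allclose(q @ t @ z.conj().T, b)
```

The reconstruction identities always hold, but the factors Q and Z themselves are not continuous in the inputs, which is consistent with the divergence noted above.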
For what it's worth, I spent a couple of hours scouring LAPACK to see if I could spot an easy routine for what we want, but with no success. It's still not clear to me whether there's something clever that we can do here that I'm missing.
There's also NUMERICAL METHODS FOR SIMULTANEOUS DIAGONALIZATION, which this rando Matlab file claims to implement.
Copy-pasta of the above screenshot as text:

Algorithm 8.7.1 Given A = A^T ∈ ℝ^{n×n} and B = B^T ∈ ℝ^{n×n} with B positive definite, the following algorithm computes a nonsingular X such that X^T A X = diag(a_1, …, a_n) and X^T B X = I_n. Compute the Cholesky factorization B = G G^T using Algorithm 4.2.2.
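As a quick sanity check on that transcription, here's a direct NumPy sketch of Algorithm 8.7.1 (my own code, not the book's):

```python
import numpy as np

def simultaneous_diagonalize(a, b):
    """Sketch of Algorithm 8.7.1: A symmetric, B symmetric positive definite."""
    g = np.linalg.cholesky(b)          # B = G G^T
    g_inv = np.linalg.inv(g)
    c = g_inv @ a @ g_inv.T            # C = G^{-1} A G^{-T} is symmetric
    _, q = np.linalg.eigh(c)           # C = Q diag(a_i) Q^T with Q orthogonal
    return g_inv.T @ q                 # X = G^{-T} Q

rng = np.random.default_rng(0)
m = rng.standard_normal((5, 5))
a = m + m.T                            # symmetric A
p = rng.standard_normal((5, 5))
b = p @ p.T + 5 * np.eye(5)            # positive definite B
x = simultaneous_diagonalize(a, b)
assert np.allclose(x.T @ b @ x, np.eye(5))          # X^T B X = I
xtax = x.T @ a @ x
assert np.allclose(xtax, np.diag(np.diag(xtax)))    # X^T A X diagonal
```

Note the positive-definiteness of B is what lets the Cholesky step work; that assumption is exactly what our imaginary part doesn't satisfy in general.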
I think find-diagonalizer-in-e-basis can be simplified to use the generalized Schur decomposition.
Sure, but I think we can modify the above approach, since we don't need …

import numpy as np
from scipy.stats import unitary_group
from rich import print
from rich.progress import track
import typer
def decomp_uut(u: np.ndarray) -> np.ndarray:
uut = u @ u.T
a, b = uut.real, uut.imag
_, g = np.linalg.eig(b)
g_1 = np.linalg.pinv(g)
c = g_1 @ a @ g_1.T
_, v = np.linalg.eig(c)
return g_1.T @ v.T
def assert_is_almost_diag(x: np.ndarray):
assert (d := np.abs(x - np.diag(np.diag(x))).max()) < 1e-8, f"Max off diag: {d}"
def test_decomp_uut(
iterations: int = typer.Option(100_000, help="Number of tests to run"),
dim: int = typer.Option(10, help="Dimension of square matrices"),
seed: int = typer.Option(8675309, help="PRNG seed"),
):
print(f"[magenta]Running {iterations:,} test(s)[/magenta]")
rng = unitary_group(dim=dim, seed=seed)
for _ in track(range(iterations), description="[cyan]Testing…[/cyan]"):
u = rng.rvs()
uut = u @ u.T
a, b = uut.real, uut.imag
x = decomp_uut(u)
assert_is_almost_diag(x.T @ a @ x)
assert_is_almost_diag(x.T @ b @ x)
print("[bold][green]Passed![/green][/bold]")
if __name__ == "__main__":
typer.run(test_decomp_uut)
Untested, but I think this should do in Lisp:

(defun decomp-uut (u)
(let* ((uut (magicl:@ u (magicl:transpose u)))
(a (magicl:.realpart uut))
(b (magicl:.imagpart uut)))
(multiple-value-bind (_ g) (magicl:eig b)
(declare (ignore _))
(let* ((g-inv (magicl:inv g))
(g-inv-transpose (magicl:transpose g-inv))
(c (magicl:@ g-inv a g-inv-transpose)))
(multiple-value-bind (_ v) (magicl:eig c)
(declare (ignore _))
(magicl:@ g-inv-transpose (magicl:transpose v)))))))

@genos Looking into this now.
@genos I implemented this in #850; I don't have more specific feedback but I'm finding the following. First, for random unitaries, it seems to work. I'm essentially running your code verbatim, except I'm orthogonalizing after and ensuring determinant = 1.
This includes passing a couple math tests. However, when running within QUILC, I get errors, namely:
These are similar errors to what we were getting with @kilimanjaro's approach. It seems that low-dimensional subsets of the unitary group are particularly troublesome.
Continuing the last message, we see that …

Disregard, had a typo.

Maybe the real and imaginary parts each must be non-singular for this to work?
Continuing with the …

from icecream import ic
import numpy as np
ic(cphase := np.diag([1, 1, 1, 1j]))  # cphase(π / 2)
a, b = cphase.real, cphase.imag
ic(a, b)
b_vals, g = np.linalg.eig(b)
ic(b_vals, g)
ic(g_inv := np.linalg.pinv(g))
ic(c := g_inv @ a @ g_inv.T)
c_vals, v = np.linalg.eig(c)
ic(c_vals, v)
ic(x := g_inv.T @ v.T)
ic(x.T @ a @ x)
ic(x.T @ b @ x)
(ql:quickload :magicl)
(let* ((cphase (magicl:from-list '(1 0 0 0
0 1 0 0
0 0 1 0
0 0 0 #C(0 1))
'(4 4)
:type '(complex double-float)))
(a (magicl:.realpart cphase))
(b (magicl:.imagpart cphase)))
(multiple-value-bind (b-vals g) (magicl:eig b)
(let* ((g-inv (magicl:inv g))
(g-inv-transpose (magicl:transpose g-inv))
(c (magicl:@ g-inv a g-inv-transpose)))
(multiple-value-bind (c-vals v) (magicl:eig c)
(let ((x (magicl:@ g-inv-transpose (magicl:transpose v))))
(loop :for (name value)
:in (list `(a ,a)
`(b ,b)
`(b-vals ,b-vals)
`(g ,g)
`(g-inv ,g-inv)
`(c ,c)
`(c-vals ,c-vals)
`(v ,v)
`(x ,x)
`(x^T-a-x ,(magicl:@ (magicl:transpose x) a x))
`(x^T-b-x ,(magicl:@ (magicl:transpose x) b x)))
:do (format t "~A: ~A~%" name value)))))))
The …
Apologies for getting Python all over your …

from icecream import ic
import numpy as np
def decompose(u: np.ndarray) -> np.ndarray:
if np.isreal(u).all():
_, x = np.linalg.eig(u)
else:
a, b = u.real, u.imag
_, g = np.linalg.eig(b)
g_inv = np.linalg.pinv(g)
c = g_inv @ a @ g_inv.T
_, v = np.linalg.eig(c)
x = g_inv.T @ v.T
return x
def is_almost_diag(x: np.ndarray) -> bool:
return np.abs(x - np.diag(np.diag(x))).max() < 1e-8
cnot = np.array([[1, 0, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0], [0, 1, 0, 0]])
cphase = np.diag([1, 1, 1, 1j])
for u in [cnot, cphase]:
ic(u)
a, b = u.real, u.imag
ic(x := decompose(u))
ic(x.T @ a @ x)
ic(is_almost_diag(x.T @ a @ x))
ic(x.T @ b @ x)
ic(is_almost_diag(x.T @ b @ x))
Hacky Lisp attempt:

(ql:quickload :magicl)
(defun real->complex (m)
"Convert a real matrix M to a complex one."
(let ((cm (magicl:zeros
(magicl:shape m)
:type `(complex ,(magicl:element-type m)))))
(magicl::map-to #'complex m cm)
cm))
(defun decompose (u)
(let* ((a (magicl:.realpart u))
(b (magicl:.imagpart u))
(abs-max-b (reduce #'max (magicl::storage (magicl:map #'abs b))))
(x (if (< abs-max-b 1e-8)
(multiple-value-bind (_ x) (magicl:eig u)
(declare (ignore _))
x)
(multiple-value-bind (_ g) (magicl:eig b)
(declare (ignore _))
(let* ((g-inv (real->complex (magicl:inv g)))
(g-inv-transpose (magicl:transpose g-inv))
(c (magicl:@ g-inv (real->complex a) g-inv-transpose)))
(multiple-value-bind (_ v) (magicl:eig c)
(declare (ignore _))
(magicl:@ g-inv-transpose (magicl:transpose v))))))))
x))
(let ((cnot (magicl:from-list '(1 0 0 0
0 0 0 1
0 0 1 0
0 1 0 0)
'(4 4)
:type '(complex double-float)))
(cphase (magicl:from-list '(1 0 0 0
0 1 0 0
0 0 1 0
0 0 0 #C(0 1))
'(4 4)
:type '(complex double-float))))
(loop :for (name unitary)
:in (list `(cnot ,cnot)
`(cphase ,cphase))
:do (progn
(format t "~A: ~A~%" name unitary)
(let* ((a (magicl:.realpart unitary))
(b (magicl:.imagpart unitary))
(x (decompose unitary))
(x^t (magicl:transpose x))
(x^t-a-x (magicl:@ x^t (real->complex a) x))
(x^t-b-x (magicl:@ x^t (real->complex b) x)))
(loop :for (k v)
:in (list `(a ,a)
`(b ,b)
`(x ,x)
`(x^t ,x^t)
`(x^t-a-x ,x^t-a-x)
`(x^t-b-x ,x^t-b-x))
:do (format t "~A: ~A~%" k v))))))
Python is OK, and good catch on the CPHASE typo. My bad. I'll look into your proposed change. |
I suppose we’d probably need a second special case for when the matrix is purely imaginary.
I handled the real=0 and imag=0 cases, and this is the next failure I get:
In this case $UU^T = I\otimes\frac{1}{\sqrt{2}}\begin{pmatrix}1 & -i \\ -i & 1\end{pmatrix}$.
Sorry for so much whack-a-mole on this one 😞 I think the issue here is that …

from icecream import ic
import numpy as np
ic(uut := np.kron(np.eye(2), 1 / np.sqrt(2) * np.array([[1, -1j], [-1j, 1]])))
a, b = uut.real, uut.imag
ic(a, b)
b_vals, g = np.linalg.eig(b)
ic(b_vals, g)
ic(x := np.linalg.pinv(g))
ic(x.T @ a @ x)
ic(x.T @ b @ x)
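One hypothesis (mine) for why this particular example breaks the real/imaginary-part approach: the real part of this UU^T is a scalar multiple of the identity, and the imaginary part has a doubly degenerate spectrum, so np.linalg.eig is free to return non-orthogonal eigenvectors within each eigenspace. A quick check:

```python
import numpy as np

uut = np.kron(np.eye(2), 1 / np.sqrt(2) * np.array([[1, -1j], [-1j, 1]]))
a, b = uut.real, uut.imag

# The real part is a scalar multiple of the identity, so X^T A X being
# diagonal already requires X itself to be orthogonal...
assert np.allclose(a, np.eye(4) / np.sqrt(2))

# ...while the imaginary part's eigenvalues are +-1/sqrt(2), each twice,
# so eig may hand back non-orthogonal vectors inside each eigenspace.
vals = np.sort(np.linalg.eigvalsh(b))
assert np.allclose(vals, np.array([-1, -1, 1, 1]) / np.sqrt(2))
```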
So:
import numpy as np
from rich import print
from rich.table import Table
def is_diag(m: np.ndarray) -> bool:
return np.abs(m - np.diag(np.diag(m))).max() < 1e-8
def decomp(uut: np.ndarray) -> np.ndarray:
a, b = uut.real, uut.imag
if is_diag(a):
_, x = np.linalg.eig(b)
elif is_diag(b):
_, x = np.linalg.eig(a)
else:
_, g = np.linalg.eig(b)
g_inv = np.linalg.pinv(g)
c = g_inv @ a @ g_inv.T
_, v = np.linalg.eig(c)
x = g_inv.T @ v.T
return x
table = Table(title="UU^T Whack-a-Mole")
table.add_column("Unitary")
table.add_column("X^TAX Diagonal?")
table.add_column("X^TBX Diagonal?")
for name, unitary in [
("cnot", np.array([[1, 0, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0], [0, 1, 0, 0]])),
("cphase", np.diag([1, 1, 1, 1j])),
("latest", np.kron(np.eye(2), 1 / np.sqrt(2) * np.array([[1, -1j], [-1j, 1]]))),
]:
a, b = unitary.real, unitary.imag
x = decomp(unitary)
table.add_row(name, str(is_diag(x.T @ a @ x)), str(is_diag(x.T @ b @ x)))
print(table)
Nope, that just fails another test. In trying

diff --git a/src/compilers/approx.lisp b/src/compilers/approx.lisp
index 2913110..6c21563 100644
--- a/src/compilers/approx.lisp
+++ b/src/compilers/approx.lisp
@@ -179,12 +179,6 @@
(not (double~ 0.0d0 (magicl:tref m i j))))
(return-from diagonal-matrix-p nil)))))
-(defun zero-matrix-p (m)
- (dotimes (i (magicl:nrows m) t)
- (dotimes (j (magicl:ncols m))
- (when (not (double~ 0.0d0 (abs (magicl:tref m i j))))
- (return-from zero-matrix-p nil)))))
-
(defun real->complex (m)
"Convert a real matrix M to a complex one."
(let ((cm (magicl:zeros
@@ -204,10 +198,10 @@ are diagonal. Return (VALUES X UU^T).
(a (magicl:map #'realpart uut))
(b (magicl:map #'imagpart uut)))
(cond
- ((zero-matrix-p a)
+ ((diagonal-matrix-p a)
(values (nth-value 1 (magicl:eig b))
uut))
- ((zero-matrix-p b)
+ ((diagonal-matrix-p b)
(values (nth-value 1 (magicl:eig a))
uut))
      (t

we fail with …
I think whack-a-mole is the only way for me, a lowly software engineer, to figure this out. (: I'm just going to call it experimental mathematics to save face. :)
Is there a general problem for unitary matrices of the form …

Oh, we could have …

I did write a function to detect whether something looks like …

Pushed that here: 5473335
Just found out about Takagi's factorization, which I think is what we're looking for. Math Overflow has a Python version, as does Strawberry Fields.
import numpy as np
import scipy.linalg as la
from rich import print
from rich.table import Table
latest = np.array(
[
[-0.965 + 0.133j, 0.093 + 0.000j, -0.000 + 0.000j, -0.000 + 0.206j],
[0.093 + 0.000j, 0.965 + 0.133j, 0.000 - 0.206j, -0.000 + 0.000j],
[-0.000 + 0.000j, 0.000 - 0.206j, 0.965 - 0.133j, -0.093 + 0.000j],
[-0.000 + 0.206j, -0.000 + 0.000j, -0.093 + 0.000j, -0.965 - 0.133j],
]
)
def decomp_via_takagi(m: np.ndarray) -> np.ndarray:
"""https://math.stackexchange.com/a/4448242"""
n = m.shape[0]
a, b = m.real, m.imag
d, p = la.schur(np.block([[-a, b], [b, a]]))
pos_eigenval_positions = np.diag(d) > 0
u = p[n:, pos_eigenval_positions] + 1j * p[:n, pos_eigenval_positions]
return la.pinv(u).T
def is_diag(m: np.ndarray) -> bool:
return np.abs(m - np.diag(np.diag(m))).max() < 1e-8
table = Table(title="UU^T Whack-a-Mole")
table.add_column("Unitary")
table.add_column("X^T UU^T X Diagonal?")
for name, unitary in [
("cnot", np.array([[1, 0, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0], [0, 1, 0, 0]])),
("cphase", np.diag([1, 1, 1, 1j])),
("kron", np.kron(np.eye(2), 1 / np.sqrt(2) * np.array([[1, -1j], [-1j, 1]]))),
("latest", latest),
]:
a, b = unitary.real, unitary.imag
x = decomp_via_takagi(unitary)
table.add_row(name, str(is_diag(x.T @ unitary @ x)))
print(table)
@genos good sleuthing! looking into it
Here's hoping some randomized testing will give us confidence…

import numpy as np
from rich import print
from rich.progress import track
import scipy.linalg as la
from scipy.stats import unitary_group
import typer
def decomp_via_takagi(m: np.ndarray) -> np.ndarray:
n = m.shape[0]
a, b = m.real, m.imag
d, p = la.schur(np.block([[-a, b], [b, a]]))
pos_eigenval_positions = np.diag(d) > 0
u = p[n:, pos_eigenval_positions] + 1j * p[:n, pos_eigenval_positions]
return la.pinv(u).T
def assert_is_diag(m: np.ndarray):
assert (d := np.abs(m - np.diag(np.diag(m))).max()) < 1e-8, f"Max-abs off-diag: {d}"
def test_su4(iterations: int, seed: int):
rng = unitary_group(dim=4, seed=seed)
for _ in track(range(iterations), description="[cyan]Testing SU(4)…[/cyan]"):
u = rng.rvs()
uut = u @ u.T
x = decomp_via_takagi(uut)
assert_is_diag(x.T @ uut @ x)
def test_su2_x_su2(iterations: int, seed: int):
rng = unitary_group(dim=2, seed=seed)
for _ in track(range(iterations), description="[cyan]Testing SU(2)⊗SU(2)…[/cyan]"):
u = np.kron(rng.rvs(), rng.rvs())
uut = u @ u.T
x = decomp_via_takagi(uut)
assert_is_diag(x.T @ uut @ x)
def main(
iterations: int = typer.Option(100_000, help="Number of tests to run"),
su4_seed: int = typer.Option(96692877, help="Seed for SU(4) tests (random.org)"),
su2_seed: int = typer.Option(29676226, help="Seed for SU(2) tests (random.org)"),
):
test_su4(iterations, su4_seed)
test_su2_x_su2(iterations, su2_seed)
print("[bold][green]Passed![/green][/bold]")
if __name__ == "__main__":
typer.run(main)
@genos just added … Will try to add the above algo and see how it goes.
@genos, about the lines

pos_eigenval_positions = np.diag(d) > 0
u = p[n:, pos_eigenval_positions] + 1j * p[:n, pos_eigenval_positions]

Is this saying that if the … Are we guaranteed the number of positive eigenvalues is …?
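I believe the answer to the second question is yes, at least when UU^T is symmetric unitary (this is my own argument, so caveat emptor). Writing UU^T = A + iB with A, B commuting real symmetric and A² + B² = I, the block matrix M = [[-A, B], [B, A]] squares to the identity, and conjugating by K = [[0, I], [-I, 0]] sends M to -M, so the spectrum is {±1} with each sign appearing exactly n times. Numerically:

```python
import numpy as np

rng = np.random.default_rng(1)
z = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
u, _ = np.linalg.qr(z)          # a random 4x4 unitary
uut = u @ u.T                   # symmetric unitary: A^2 + B^2 = I, AB = BA
a, b = uut.real, uut.imag
m = np.block([[-a, b], [b, a]])

# M is a symmetric involution, so every eigenvalue is +1 or -1...
assert np.allclose(m @ m, np.eye(8))

# ...and K M K^T = -M for the orthogonal K below, so the +1 and -1
# eigenspaces have equal dimension: exactly n = 4 positive eigenvalues.
k = np.block([[np.zeros((4, 4)), np.eye(4)], [-np.eye(4), np.zeros((4, 4))]])
assert np.allclose(k @ m @ k.T, -m)
assert np.sum(np.linalg.eigvalsh(m) > 0) == 4
```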
@genos So, as is sometimes typical with me, I started implementing things without thinking about it too deeply first. Only after quilc tests started failing did I think about it. :S Don't we have a problem with Takagi? It just says that if we have a symmetric matrix …

We need a …
Wait, if we're looking for a decomposition of a symmetric unitary …

import numpy as np
from rich import print
from rich.progress import track
from scipy.stats import unitary_group
import typer
def decomp_via_eigendecomp(uut: np.ndarray) -> np.ndarray:
return np.linalg.eig(uut)[1]
def _test(u: np.ndarray):
uut = u @ u.T
x = decomp_via_eigendecomp(uut)
m = x.T @ uut @ x
assert (d := np.abs(m - np.diag(np.diag(m))).max()) < 1e-8, f"Max-abs off-diag: {d}"
def test_su4(iterations: int, seed: int):
rng = unitary_group(dim=4, seed=seed)
for _ in track(range(iterations), description="[cyan]Testing SU(4)…[/cyan]"):
_test(rng.rvs())
def test_su2_x_su2(iterations: int, seed: int):
rng = unitary_group(dim=2, seed=seed)
for _ in track(range(iterations), description="[cyan]Testing SU(2)⊗SU(2)…[/cyan]"):
_test(np.kron(rng.rvs(), rng.rvs()))
def main(
iterations: int = typer.Option(100_000, help="Number of tests to run"),
su4_seed: int = typer.Option(96692877, help="Seed for SU(4) tests (random.org)"),
su2_seed: int = typer.Option(29676226, help="Seed for SU(2) tests (random.org)"),
):
test_su4(iterations, su4_seed)
test_su2_x_su2(iterations, su2_seed)
print("[bold][green]Passed![/green][/bold]")
if __name__ == "__main__":
typer.run(main)
No, of course it's not that easy
@stylewarning I'm having trouble understanding why the following doesn't set the determinant of the returned …

diff --git a/src/compilers/approx.lisp b/src/compilers/approx.lisp
index 53abb86..de30a87 100644
--- a/src/compilers/approx.lisp
+++ b/src/compilers/approx.lisp
@@ -212,39 +212,13 @@
(magicl::map-to #'complex m cm)
cm))
-(defun takagi-decomposition-of-uu^t (u)
- "Given a unitary U, finds an X such that
-
- X^T (UU^T) X
-
-is a diagonal matrix. Return (VALUES X UU^T)."
+(defun decomposition-of-uu^t (u)
+ "Given a unitary U, finds an X such that X^T (UU^T) X is a diagonal matrix. Return (VALUES X UU^T)."
(let* ((uut (magicl:@ u (magicl:transpose u)))
- (n (magicl:nrows uut))
- (a (magicl:.realpart uut))
- (b (magicl:.imagpart uut))
- (m (magicl:block-matrix (list (magicl:map #'- a) b b a) '(2 2))))
- (multiple-value-bind (p d) (magicl:schur m)
- (let* ((positive-eigs (loop :for i :below (magicl:nrows d)
- :for e := (magicl:tref d i i)
- :when (and (double~ 0.0d0 (imagpart e))
- (plusp (realpart e)))
- :collect i))
- (diagonalizer (magicl:zeros (list n n) :type '(complex double-float))))
- (assert (= 4 (length positive-eigs))
- ()
- "Expected 4 positive eigenvalues. Got ~D." (length positive-eigs))
- (dotimes (row n)
- (loop :for col :below n
- :for from-col :in positive-eigs :do
- (setf (magicl:tref diagonalizer row col)
- (complex (magicl:tref p (+ n row) from-col)
- (magicl:tref p row from-col)))))
- (setf diagonalizer (magicl:transpose (magicl:inv diagonalizer)))
- (when *check-math*
- (assert (diagonal-matrix-p (magicl:@ (magicl:transpose diagonalizer)
- uut
- diagonalizer))))
- (values diagonalizer uut)))))
+ (x (nth-value 1 (magicl:eig uut)))
+ (det-x (magicl:det x))
+ (normalized-x (magicl:map #'(lambda (v) (/ v det-x)) x)))
+ (values normalized-x uut)))
consider for example the diagonal matrix … Suppose we have a different matrix … We need both that …
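A toy example (my own) of why fixing the determinant is not enough: a shear has determinant exactly 1 yet is far from orthogonal, so it can't even diagonalize the identity in the X^T (·) X sense.

```python
import numpy as np

# A shear: determinant is exactly 1, but it is far from orthogonal.
x = np.array([[1.0, 1.0], [0.0, 1.0]])
assert np.isclose(np.linalg.det(x), 1.0)

# X^T I X = X^T X is not diagonal, so normalizing det(X) alone cannot
# guarantee that X^T (UU^T) X comes out diagonal, even for UU^T = I.
m = x.T @ x
assert not np.allclose(m, np.diag(np.diag(m)))
```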
Ugh, the (many) professors who tried to teach me (numerical) linear algebra are very disappointed in me |
@genos Maybe it's good to take a step back and re-evaluate what we might be solving by avoiding calculating eigenvalues in the current way. We've at least removed all non-determinism. I think the only benefit could be that we might get more numerical stability, but even for that I don't have a good argument. There are other things which show that we have some misunderstandings in the code, which may be good places to start fixing. For instance, when testing, we get:

which is something the code is expecting, yet we don't see follow-up failures from this violation.
Some stats: running the test suite in full, it takes …

So it seems the current state of affairs might be OK, at least in terms of doing what it's supposed to.
Sorry for going at this in a rather headstrong fashion; I was under a deadline and wanted to try to squeeze out a result beforehand. Glad you’re taking a fresh look at things @stylewarning! When you say “the current state of affairs,” is that the PR branch or …?
@genos ah yeah, I forgot about ARM completely. What's the state of ARM on master? |
I haven’t fuzzed the random state of the programs I was trying to generate, compile, and run, but #842 was uniformly throwing violent errors my way.
@genos If I had an ARM machine, I would be happy to do some spelunking. Unfortunately I don't. :/ Actually, the last error you posted in the thread is somewhat hopeful: it's talking about a diagonal matrix of … If I had to put money on a particular thing being a problem, I'd say it might be a difference in how …
@stylewarning I’d charged ahead on this so hard that I hadn’t focused on the thing that brought me here in the first place! Though I’d be embarrassed if it turned out to be that simple a fix, if all we needed was some special-case handling around “is this already diagonal?” then I’d be much obliged (and humbled).
After my myriad false starts I hesitate to be even cautiously optimistic, but: using the Schur decomposition of UU^T:

import numpy as np
from rich import print
from rich.progress import track
import scipy.linalg as la
from scipy.stats import unitary_group
import typer
def decomp(u: np.ndarray) -> np.ndarray:
_, z = la.schur(u @ u.T)
return z / la.eigvals(z)
def _test(u: np.ndarray):
x = decomp(u)
uut = u @ u.T
m = x.T @ uut @ x
_, n = u.shape
np.testing.assert_allclose(la.det(x), 1, err_msg="det(X) ≠ 1")
np.testing.assert_equal(
np.any(np.not_equal(x, u)), True, err_msg="Just returned X = U"
)
assert np.abs(m - np.diag(np.diag(m))).max() < 1e-10, "X^T U U^T X not diagonal"
def test_specifics():
for unitary in track(
[
np.array([[1, 0, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0], [0, 1, 0, 0]]),
np.diag([1, 1, 1, 1j]),
np.kron(np.eye(2), 1 / np.sqrt(2) * np.array([[1, -1j], [-1j, 1]])),
np.array(
[
[-0.965 + 0.133j, 0.093 + 0.000j, -0.000 + 0.000j, -0.000 + 0.206j],
[0.093 + 0.000j, 0.965 + 0.133j, 0.000 - 0.206j, -0.000 + 0.000j],
[-0.000 + 0.000j, 0.000 - 0.206j, 0.965 - 0.133j, -0.093 + 0.000j],
[
-0.000 + 0.206j,
-0.000 + 0.000j,
-0.093 + 0.000j,
-0.965 - 0.133j,
],
]
),
],
description="[cyan]Testing specific examples…[/cyan]",
):
_test(unitary)
def test_su4(iterations: int, seed: int):
rng = unitary_group(dim=4, seed=seed)
for _ in track(range(iterations), description="[cyan]Testing SU(4)…[/cyan]"):
_test(rng.rvs())
def test_su2_x_su2(iterations: int, seed: int):
rng = unitary_group(dim=2, seed=seed)
for _ in track(range(iterations), description="[cyan]Testing SU(2)⊗SU(2)…[/cyan]"):
_test(np.kron(rng.rvs(), rng.rvs()))
def main(
iterations: int = typer.Option(100_000, help="Number of tests to run"),
su4_seed: int = typer.Option(96692877, help="Seed for SU(4) tests (random.org)"),
su2_seed: int = typer.Option(29676226, help="Seed for SU(2) tests (random.org)"),
):
test_specifics()
test_su4(iterations, su4_seed)
test_su2_x_su2(iterations, su2_seed)
print("[bold][green]Passed![/green][/bold]")
if __name__ == "__main__":
typer.run(main)

Though I run into problems when trying to make a similar change in the …

diff --git a/src/compilers/approx.lisp b/src/compilers/approx.lisp
index 53abb86..f1faee1 100644
--- a/src/compilers/approx.lisp
+++ b/src/compilers/approx.lisp
@@ -212,46 +212,30 @@
(magicl::map-to #'complex m cm)
cm))
-(defun takagi-decomposition-of-uu^t (u)
+(defun decomposition-of-uu^t (u)
"Given a unitary U, finds an X such that
X^T (UU^T) X
is a diagonal matrix. Return (VALUES X UU^T)."
- (let* ((uut (magicl:@ u (magicl:transpose u)))
- (n (magicl:nrows uut))
- (a (magicl:.realpart uut))
- (b (magicl:.imagpart uut))
- (m (magicl:block-matrix (list (magicl:map #'- a) b b a) '(2 2))))
- (multiple-value-bind (p d) (magicl:schur m)
- (let* ((positive-eigs (loop :for i :below (magicl:nrows d)
- :for e := (magicl:tref d i i)
- :when (and (double~ 0.0d0 (imagpart e))
- (plusp (realpart e)))
- :collect i))
- (diagonalizer (magicl:zeros (list n n) :type '(complex double-float))))
- (assert (= 4 (length positive-eigs))
- ()
- "Expected 4 positive eigenvalues. Got ~D." (length positive-eigs))
- (dotimes (row n)
- (loop :for col :below n
- :for from-col :in positive-eigs :do
- (setf (magicl:tref diagonalizer row col)
- (complex (magicl:tref p (+ n row) from-col)
- (magicl:tref p row from-col)))))
- (setf diagonalizer (magicl:transpose (magicl:inv diagonalizer)))
- (when *check-math*
- (assert (diagonal-matrix-p (magicl:@ (magicl:transpose diagonalizer)
- uut
- diagonalizer))))
- (values diagonalizer uut)))))
+ (let* ((u-u^t (magicl:@ u (magicl:transpose u)))
+ (z (nth-value 0 (magicl:schur u-u^t)))
+ (eig-vals-z (nth-value 0 (magicl:eig z)))
+ (n (first (magicl:shape u)))
+ (eig-vals-mat (magicl:from-list
+ (loop :with m = '()
+ :for i :below n
+ :do (setf m (append m eig-vals-z)) :finally (return m))
+ (list n n)))
+ (x (magicl:./ z eig-vals-mat)))
+ (values x u-u^t)))
(defun find-diagonalizer-in-e-basis (m)
"For M in SU(4), compute an SO(4) column matrix of eigenvectors of E^* M E (E^* M E)^T."
(check-type m magicl:matrix)
(assert (magicl:unitary-matrix-p m))
(let ((u (magicl:@ +edag-basis+ m +e-basis+)))
- (multiple-value-bind (evecs gammag) (takagi-decomposition-of-uu^t u)
+    (multiple-value-bind (evecs gammag) (decomposition-of-uu^t u)

I receive the following complaint with …
The Gram-Schmidt process in …

- (assert (magicl:every #'double~
- (eye 4 :type 'double-float)
- (magicl:@ (magicl:transpose evecs)
- evecs))
+ (assert (magicl:unitary-matrix-p evecs)
(evecs)
- "The calculated eigenvectors were not found to be orthonormal. ~
- EE^T =~%~A"
- (magicl:@ (magicl:transpose evecs)
- evecs)))
+ "The calculated eigenvectors were not found to be unitary. ~
+ EE^† =~%~A"
(magicl:@ (magicl:dagger evecs) evecs)))

leads to a different failure later on:

So perhaps we do in fact want …
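A tiny example (my own) of the gap I think is biting us here: a matrix can pass the relaxed unitarity check, Z^†Z = I, while failing the original orthogonality one, Z^T Z = I.

```python
import numpy as np

# Unitary but not real-orthogonal: Z^dagger Z = I while Z^T Z != I.
z = np.diag([1.0 + 0j, 1j])
assert np.allclose(z.conj().T @ z, np.eye(2))   # passes the relaxed check
assert not np.allclose(z.T @ z, np.eye(2))      # Z^T Z = diag(1, -1)
```

So relaxing the assertion only hides the problem: the downstream code really does need a real-orthogonal diagonalizer.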
EDIT This issue was previously about using generalized Schur to re-implement F-D-I-E-B (find-diagonalizer-in-e-basis), but it turned out not to be a working approach. As such, I'm changing this issue into a discussion about how we might approach doing so. I'm mostly interpreting some things @kilimanjaro told me (so credit goes to him), though any errors below are my own.

find-diagonalizer-in-e-basis aims to diagonalize a symmetric unitary matrix gammag = u u^T in terms of an orthogonal matrix of eigenvectors. Normally, all that the spectral theorem gives you (in this context) is that you can do this in terms of a unitary matrix of eigenvectors. In this case, however, the real and imaginary parts of gammag commute, so it's sufficient to simultaneously diagonalize them. Since they are real, symmetric matrices, their eigenvectors will be real and will give an orthogonal matrix.

To simultaneously diagonalize a pair of commuting matrices, it's not quite enough to compute eigenvectors and eigenvalues of one and then hope that this works for the other (consider the identity matrix, which commutes with anything, but whose eigenspaces we need to write as spanned by eigenvectors of some other matrix). The current QUILC approach is to try to diagonalize a linear combination of the real and imaginary parts. Since there could be some relationships between them that we don't know, the current code picks a random combination and repeats until we get something that works.
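The identity-matrix caveat above in miniature (my own toy example): eigenvectors computed for one matrix need not diagonalize a second commuting matrix; you have to pick a basis adapted to both.

```python
import numpy as np

a = np.eye(2)
b = np.array([[0.0, 1.0], [1.0, 0.0]])   # commutes with the identity

# The standard basis diagonalizes A = I, but not B:
assert not np.allclose(b, np.diag(np.diag(b)))

# Eigenvectors of B, however, diagonalize both at once:
_, v = np.linalg.eigh(b)
for x in (v.T @ a @ v, v.T @ b @ v):
    assert np.allclose(x, np.diag(np.diag(x)))
```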
This problem has a standard solution that linear algebra libraries implement, called the generalized Schur decomposition; LAPACK documents it here. A Python implementation (which uses SciPy's qz) supplied by @kilimanjaro is here: …

… qz into MAGICL.