We know that, in general, a projection operator from an n-dimensional vector space V onto an s-dimensional subspace S with basis $\{|\alpha_0\rangle ,\ldots, |\alpha_{s-1}\rangle\}$ is defined as $P_s = \sum_{i=0}^{s-1} |\alpha_i\rangle\langle\alpha_i|$. From this, we can see that the matrix representation of the projection operator must be a symmetric matrix. Furthermore, the coefficient of each element must be real for all basis vectors in S (since every basis vector can be written as a linear combination of the standard basis vectors), and 0 for all standard basis vectors of V that are not in S. Thus, every element of the matrix form of P is real and hence its own complex conjugate. Because P is a symmetric matrix, it is its own transpose. Combining these two facts, we see that P must be its own adjoint.
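As a quick sanity check of the conclusion (if not of every step of the reasoning), here is a small NumPy sketch that builds $P_s$ from two orthonormal basis vectors of a hypothetical 2-dimensional subspace of $\mathbb{C}^3$ and verifies that the result equals its own adjoint; the particular vectors are chosen arbitrarily for illustration.

```python
import numpy as np

# Hypothetical orthonormal basis {|a0>, |a1>} of a 2-D subspace of C^3;
# one vector deliberately has a complex amplitude.
a0 = np.array([1, 0, 0], dtype=complex)
a1 = np.array([0, 1j, 0], dtype=complex)

# P_s = sum_i |a_i><a_i| : sum of outer products of each basis vector
# with its own complex conjugate.
P = np.outer(a0, a0.conj()) + np.outer(a1, a1.conj())

print(np.allclose(P, P.conj().T))  # True: P equals its own adjoint
print(np.allclose(P @ P, P))       # True: P is idempotent, as a projector must be
```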

I can't readily "see" the inferences in the above reasoning, so I'm going to work out what the matrix of a projection operator looks like.

Consider a basis vector $\ket{\alpha_i}$ of $S$, expressed in the standard basis as

$\ket{\alpha_i} = a_0\ket{0} + a_1\ket{1} + \dots + a_{n-1}\ket{n-1}$

To simplify the presentation, instead of the "full" $P_s$, let's consider the matrix of each individual outer product $A = \ket{\alpha_i} \bra{\alpha_i}$. An element of $A$ has the form

$A_{jk} = a_j\overline{a_k}$

An element of its conjugate transpose $A^{\dagger}$ is

$A^{\dagger}_{jk} = \overline{A_{kj}} = \overline{a_k\overline{a_j}} = \overline{a_k} a_j = a_j \overline{a_k} = A_{jk}$

Thus, $A$ is its own conjugate transpose and $P_s$, which is a sum of all $A$s, is also its own conjugate transpose.
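The element-wise identity above can be checked numerically. This is a sketch with an arbitrarily chosen complex vector: it confirms both $A_{jk} = a_j\overline{a_k}$ for `np.outer` and $A = A^{\dagger}$.

```python
import numpy as np

# Arbitrary vector with complex entries, normalised to be a unit vector.
alpha = np.array([1 + 2j, 3 - 1j, 0.5j])
alpha = alpha / np.linalg.norm(alpha)

# A = |alpha><alpha| : np.outer(u, v)[j, k] = u[j] * v[k],
# so with v = conj(alpha) we get A_jk = a_j * conj(a_k).
A = np.outer(alpha, alpha.conj())

j, k = 0, 1
print(np.isclose(A[j, k], alpha[j] * alpha[k].conj()))  # True
print(np.allclose(A, A.conj().T))                       # True: A is its own adjoint
```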

Note that this does not mean that the matrix of $P_s$ or of any of the $A$s is symmetric and real. Only the diagonal elements of the $A$s and of $P_s$ are necessarily real, because $a_j\overline{a_k}$ is guaranteed to be real only when $j = k$.
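A concrete counterexample makes this point: take a one-dimensional subspace spanned by a vector with a complex amplitude (chosen here purely for illustration). The resulting projector is Hermitian with a real diagonal, but it is neither real nor symmetric.

```python
import numpy as np

# |alpha> = (|0> + i|1>) / sqrt(2): a unit vector with a complex amplitude.
alpha = np.array([1, 1j]) / np.sqrt(2)
P = np.outer(alpha, alpha.conj())  # off-diagonal entries are +-0.5j

print(np.allclose(np.diag(P).imag, 0))  # True: diagonal entries are real
print(np.allclose(P, P.T))              # False: P is not symmetric
print(np.allclose(P, P.conj().T))       # True: P is still Hermitian
```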
