Let $\vec{x} = \begin{bmatrix}1\\ -2\\1\\6\end{bmatrix}$ in $\RR^4$, and let $W = \mbox{span}\left(\begin{bmatrix}2\\1\\3\\ -4\end{bmatrix}, \begin{bmatrix}1\\2\\0\\1\end{bmatrix}\right)$.
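The two spanning vectors here are orthogonal (their dot product is $2+2+0-4=0$), so the projection formula applies to them directly. Writing $\vec{w}_1, \vec{w}_2$ for the two spanning vectors (our labels, not the original's), a sketch of the computation this setup points toward is:
% Illustrative computation; the notation w_1, w_2 is ours.
\begin{equation*}
\mbox{proj}_W(\vec{x}) = \frac{\vec{x} \dotp \vec{w}_1}{\vec{w}_1 \dotp \vec{w}_1}\,\vec{w}_1 + \frac{\vec{x} \dotp \vec{w}_2}{\vec{w}_2 \dotp \vec{w}_2}\,\vec{w}_2 = \frac{-21}{30}\begin{bmatrix}2\\1\\3\\-4\end{bmatrix} + \frac{3}{6}\begin{bmatrix}1\\2\\0\\1\end{bmatrix} = \frac{1}{10}\begin{bmatrix}-9\\3\\-21\\33\end{bmatrix}.
\end{equation*}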
What we observed above holds in general. We will use this fact to find $\vec{z}$.
- Every vector in $\text{col}(A)$ can be written in the form $A\vec{x}$ for some $\vec{x}$ in $\RR^m$. Our goal is to find $\vec{z}$ such that $A\vec{z}$ is the orthogonal projection of $\vec{b}$ onto $\text{col}(A)$. By Corollary \ref{cor:orthProjOntoW}, every vector $A\vec{x}$ in $\text{col}(A)$ is orthogonal to $\vec{b}-A\vec{z}$. This means $\vec{b}-A\vec{z}$ is in the orthogonal complement of $\text{col}(A)$, which is $\text{null}(A^T)$.
+ Every vector in $\text{col}(A)$ can be written in the form $A\vec{x}$ for some $\vec{x}$ in $\RR^m$. Our goal is to find $\vec{z}$ such that $A\vec{z}$ is the orthogonal projection of $\vec{b}$ onto $\text{col}(A)$. By Corollary \ref{cor:orthProjOntoW}, every vector $A\vec{x}$ in $\text{col}(A)$ is orthogonal to $\vec{b}-A\vec{z}$. %This means $\vec{b}-A\vec{z}$ is in the orthogonal complement of $\text{col}(A)$, which is $\text{null}(A^T)$.
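For reference, the sentence being commented out is the step that produces the normal equations: $\vec{b}-A\vec{z}$ lies in $\text{null}(A^T)$ exactly when
\begin{equation*}
A^T(\vec{b}-A\vec{z}) = \vec{0}, \quad\mbox{that is,}\quad A^TA\vec{z} = A^T\vec{b},
\end{equation*}
and when the columns of $A$ are linearly independent, $A^TA$ is invertible, giving $\vec{z} = (A^TA)^{-1}A^T\vec{b}$, the route to Equation (\ref{eq:leastSquaresZ}).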
- Find a linear function of best fit for each of the following sets of data points. Examine how well your line fits the points by typing the equation of the line into the Desmos window.
\begin{problem}\label{prob:leastSq2a}
+ Find a linear function of best fit for the given set of data points. Examine how well your line fits the points by typing the equation of the line into the Desmos window.
- For more information about doing least squares with Octave, please see \href{https://ximera.osu.edu/linearalgebradzv3/xOctave/OCT_orth/main}{Octave for Chapter 9}.
+ For more information about doing least squares with Octave, please see \href{https://ximera.osu.edu/corela/xOctave/OCT_orth/main}{Octave for Chapter 9}.
\end{problem}
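As a sketch of the method these Desmos problems rehearse, with made-up points $(0,1)$, $(1,3)$, $(2,4)$ rather than the exercise's data: fitting $y = a + mx$ leads to the inconsistent system $A\vec{z}=\vec{b}$ with
% Illustrative data; not taken from the exercise.
\begin{equation*}
A = \begin{bmatrix}1 & 0\\1 & 1\\1 & 2\end{bmatrix}, \quad \vec{b} = \begin{bmatrix}1\\3\\4\end{bmatrix}, \quad A^TA = \begin{bmatrix}3 & 3\\3 & 5\end{bmatrix}, \quad A^T\vec{b} = \begin{bmatrix}8\\11\end{bmatrix},
\end{equation*}
and solving $A^TA\vec{z} = A^T\vec{b}$ gives $\vec{z} = \begin{bmatrix}7/6\\3/2\end{bmatrix}$, i.e. the best-fit line $y = \frac{7}{6} + \frac{3}{2}x$.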
- \begin{problem}\label{ex:5_6_14}
- If $A$ is an $m \times n$ matrix, it can be proved that there exists a unique $n \times m$ matrix $A^{\#}$ satisfying the following four conditions: $AA^{\#}A = A$; $A^{\#}AA^{\#} = A^{\#}$; $AA^{\#}$ and $A^{\#}A$ are symmetric. The matrix $A^{\#}$ is called the \emph{Moore-Penrose} inverse.
+ %\begin{problem}\label{ex:5_6_14}
+ %If $A$ is an $m \times n$ matrix, it can be proved that there exists a unique $n \times m$ matrix $A^{\#}$ satisfying the following four conditions: $AA^{\#}A = A$; $A^{\#}AA^{\#} = A^{\#}$; $AA^{\#}$ and $A^{\#}A$ are symmetric. The matrix $A^{\#}$ is called the \emph{Moore-Penrose} inverse.
- \begin{enumerate}
- \item If $A$ is square and invertible, show that $A^{\#} = A^{-1}$.
+ %\begin{enumerate}
+ %\item If $A$ is square and invertible, show that $A^{\#} = A^{-1}$.
- \item If $\text{rank} A = m$, show that $A^{\#} = A^{T}(AA^{T})^{-1}$.
+ %\item If $\text{rank} A = m$, show that $A^{\#} = A^{T}(AA^{T})^{-1}$.
- \item If $\text{rank} A = n$, show that $A^{\#} = (A^{T}A)^{-1}A^{T}$. (Notice the appearance of the Moore-Penrose inverse arrived when we solve the normal equations, arriving at Equation (\ref{eq:leastSquaresZ})).
+ %\item If $\text{rank} A = n$, show that $A^{\#} = (A^{T}A)^{-1}A^{T}$. (Notice that the Moore-Penrose inverse appeared when we solved the normal equations, arriving at Equation (\ref{eq:leastSquaresZ}).)
- \end{enumerate}
- \end{problem}
+ %\end{enumerate}
+ %\end{problem}
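Although the exercise above is now commented out, part (a) has a one-line check worth recording as a sketch: when $A$ is square and invertible, the candidate $A^{\#} = A^{-1}$ satisfies
\begin{equation*}
AA^{-1}A = A, \qquad A^{-1}AA^{-1} = A^{-1}, \qquad AA^{-1} = I = A^{-1}A,
\end{equation*}
and $I$ is symmetric; the claimed uniqueness of $A^{\#}$ then forces $A^{\#} = A^{-1}$.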
% In many scientific investigations, data are collected that relate two variables. For example, if $x$ is the
RTH-0035/main.tex (85 additions, 74 deletions)
@@ -21,7 +21,7 @@ \section*{Orthogonal Matrices and Symmetric Matrices}
As we have seen, the nice bases of $\RR^n$ are the orthogonal ones, so a natural question is: which $n \times n$ matrices have $n$ orthogonal eigenvectors, so that the columns of $P$ form an orthogonal basis for $\RR^n$? These turn out to be precisely the \dfn{symmetric matrices} (matrices for which $A=A^T$), and this is the main result of this section.
\section*{Orthogonal Matrices}
- Recall that an orthogonal set of vectors is called \dfn{orthonormal} if $\norm{\vec{q}} = 1$ for each vector $\vec{q}$ in the set, and that any orthogonal set $\{\vec{v}_{1}, \vec{v}_{2}, \dots, \vec{v}_{k}\}$ can be ``\textit{normalized}'', i.e. converted into an orthonormal set $\left\{\frac{1}{\norm{\vec{v}_{1}}}\vec{v}_{1}, \frac{1}{\norm{\vec{v}_{2}}}\vec{v}_{2}, \dots, \frac{1}{\norm{\vec{v}_{k}}}\vec{v}_{k} \right\}$. In particular, if a matrix $A$ has $n$ orthogonal eigenvectors, they can (by normalizing) be taken to be orthonormal. The corresponding diagonalizing matrix (we will use $Q$ instead of $P$) has orthonormal columns, and such matrices are very easy to invert.
+ A collection of non-zero, pairwise orthogonal vectors in $\RR^n$ is called an \dfn{orthogonal} set of vectors. An orthogonal set of vectors is called \dfn{orthonormal} if $\norm{\vec{q}} = 1$ for each vector $\vec{q}$ in the set. A set of orthogonal vectors $\{\vec{v}_{1}, \vec{v}_{2}, \dots, \vec{v}_{k}\}$ can be ``\textit{normalized}'', i.e. converted into an orthonormal set $\left\{\frac{1}{\norm{\vec{v}_{1}}}\vec{v}_{1}, \frac{1}{\norm{\vec{v}_{2}}}\vec{v}_{2}, \dots, \frac{1}{\norm{\vec{v}_{k}}}\vec{v}_{k} \right\}$. In particular, if a matrix $A$ has $n$ orthogonal eigenvectors, they can (by normalizing) be taken to be orthonormal. The corresponding diagonalizing matrix (we will use $Q$ instead of $P$) has orthonormal columns, and such matrices are very easy to invert.
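To make the normalization step concrete (a small example of our own, not from the original text): each vector in the orthogonal set $\left\{\begin{bmatrix}1\\1\end{bmatrix}, \begin{bmatrix}1\\-1\end{bmatrix}\right\}$ has norm $\sqrt{2}$, so normalizing produces the orthonormal set $\left\{\frac{1}{\sqrt{2}}\begin{bmatrix}1\\1\end{bmatrix}, \frac{1}{\sqrt{2}}\begin{bmatrix}1\\-1\end{bmatrix}\right\}$.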
\begin{theorem}\label{th:orthogonal_matrices}
@@ -263,7 +263,7 @@ \section*{Symmetric Matrices}
\end{proof}
- Because the eigenvalues of a real symmetric matrix are real, Theorem~\ref{th:PrinAxes} is also called the \dfn{Real Spectral Theorem}, and the set of distinct eigenvalues is called the \dfn{spectrum} of the matrix. A similar result holds for matrices with complex entries (Theorem \ref{th:025890}).
+ Because the eigenvalues of a real symmetric matrix are real, Theorem~\ref{th:PrinAxes} is called the \dfn{Real Spectral Theorem}, and the set of distinct eigenvalues is called the \dfn{spectrum} of the matrix. A similar result holds for matrices with complex entries (Theorem \ref{th:025890}).
Hence the distinct eigenvalues are $0$ and $9$, of algebraic multiplicity $1$ and $2$, respectively. The geometric multiplicities must be the same, since $A$, being symmetric, is diagonalizable. It follows that $\mbox{dim}(\mathcal{S}_0) = 1$ and $\mbox{dim}(\mathcal{S}_9) = 2$. Gaussian elimination gives
\begin{equation*}
@@ -395,7 +395,7 @@ \section*{Symmetric Matrices}
1
\end{bmatrix} \right\rbrace
\end{equation*}
- The eigenvectors in $\mathcal{S}_{9}$ are both orthogonal to $\vec{x}_{1}$ as Theorem~\ref{th:symmetric_has_ortho_ev} guarantees, but not to each other. However, the Gram-Schmidt process yields an orthogonal basis
+ The eigenvectors in $\mathcal{S}_{9}$ are both orthogonal to $\vec{x}_{1}$ as Theorem~\ref{th:symmetric_has_ortho_ev} guarantees, but not to each other. However, an orthogonal basis can be found using projections. (See \href{https://ximera.osu.edu/corela/LinearAlgebraInteractiveIntro/RTH-0015/main}{Gram-Schmidt Orthogonalization}.)
\begin{equation*}
\{\vec{f}_{2}, \vec{f}_{3}\}\mbox{ of } \mathcal{S}_{9}(A) \quad\mbox{ where } \quad\vec{f}_{2} = \begin{bmatrix}
Suppose $A$ is orthogonally diagonalizable. Prove that $A$ is symmetric. (This is the easy direction of the ``if and only if'' in Theorem \ref{th:PrinAxes}.)
+ \end{problem}
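A sketch of the argument this problem asks for, assuming (as in Theorem \ref{th:PrinAxes}) that orthogonally diagonalizable means $A = QDQ^T$ with $Q$ orthogonal and $D$ diagonal:
\begin{equation*}
A^T = (QDQ^T)^T = (Q^T)^T D^T Q^T = QDQ^T = A,
\end{equation*}
using that the diagonal matrix $D$ is symmetric.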
\begin{problem}\label{ex:8_2_15}
Prove the converse of Theorem~\ref{th:dotpSymmetric}:
- A matrix that we obtain from the identity matrix by writing its rows in a different order is called a \dfn{permutation matrix} (see Theorem \ref{th:LUPA}). Show that every permutation matrix is orthogonal.
- \end{problem}
+ %\begin{problem}\label{prob:ortho21}
+ %A matrix that we obtain from the identity matrix by writing its rows in a different order is called a \dfn{permutation matrix} (see Theorem \ref{th:LUPA}). Show that every permutation matrix is orthogonal.
+ %\end{problem}
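For the record, a sketch of why the commented-out claim holds, writing $\sigma$ for the permutation that reorders the rows (our notation): row $i$ of a permutation matrix $P$ is the standard basis vector $\vec{e}_{\sigma(i)}$, so
\begin{equation*}
(PP^T)_{ij} = \vec{e}_{\sigma(i)} \dotp \vec{e}_{\sigma(j)} = \begin{cases}1 & i = j\\ 0 & i \neq j,\end{cases}
\end{equation*}
that is, $PP^T = I$ and $P$ is orthogonal.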
- \begin{problem}\label{prob:ortho25}
- Show that the following are equivalent for an $n \times n$ matrix $Q$.
+ %\begin{problem}\label{prob:ortho25}
+ %Show that the following are equivalent for an $n \times n$ matrix $Q$.
- \begin{enumerate}
- \item $Q$ is orthogonal.
+ %\begin{enumerate}
+ %\item $Q$ is orthogonal.
- \item $\norm{Q\vec{x}} = \norm{\vec{x}}$ for all $\vec{x}\in\RR^n$.
+ %\item $\norm{Q\vec{x}} = \norm{\vec{x}}$ for all $\vec{x}\in\RR^n$.
- \item $\norm{ Q\vec{x} - Q\vec{y}} = \norm{\vec{x} - \vec{y}}$ for all $\vec{x}$, $\vec{y}\in\RR^n$.
+ %\item $\norm{ Q\vec{x} - Q\vec{y}} = \norm{\vec{x} - \vec{y}}$ for all $\vec{x}$, $\vec{y}\in \RR^n$.
- \item $(Q\vec{x}) \dotp (Q\vec{y}) = \vec{x} \dotp\vec{y}$ for all columns $\vec{x}$, $\vec{y}\in\RR^n$.
+ %\item $(Q\vec{x}) \dotp (Q\vec{y}) = \vec{x} \dotp \vec{y}$ for all columns $\vec{x}$, $\vec{y}\in\RR^n$.
- \begin{hint}
- %For (c) $\Rightarrow$ (d), see Exercise \ref{ex:5_3_14}(a).
- For (d) $\Rightarrow$ (a), show that column $i$ of $Q$ equals $Q\vec{e}_{i}$, where $\vec{e}_{i}$ is column $i$ of the identity matrix.
- \end{hint}
- \end{enumerate}
+ %\begin{hint}
+ % %For (c) $\Rightarrow$ (d), see Exercise \ref{ex:5_3_14}(a).
+ %For (d) $\Rightarrow$ (a), show that column $i$ of $Q$ equals $Q\vec{e}_{i}$, where $\vec{e}_{i}$ is column $i$ of the identity matrix.
+ %\end{hint}
+ %\end{enumerate}
- \begin{remark}
- This exercise shows that linear transformations with orthogonal standard matrices are distance-preserving (b,c) and angle-preserving (d).
- \end{remark}
+ %\begin{remark}
+ % This exercise shows that linear transformations with orthogonal standard matrices are distance-preserving (b,c) and angle-preserving (d).
+ %\end{remark}
- \end{problem}
+ %\end{problem}
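Unpacking the commented-out hint for (d) $\Rightarrow$ (a) as a sketch: column $i$ of $Q$ is $Q\vec{e}_{i}$, so (d) gives
\begin{equation*}
(Q\vec{e}_{i}) \dotp (Q\vec{e}_{j}) = \vec{e}_{i} \dotp \vec{e}_{j},
\end{equation*}
which is $1$ when $i = j$ and $0$ otherwise; hence the columns of $Q$ are orthonormal and $Q$ is orthogonal.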
- \begin{problem}\label{prob:SchurChallenge}
- Modify the proof of Theorem~\ref{th:PrinAxes} to prove Theorem \ref{th:Schur}.
+ %\begin{problem}\label{prob:SchurChallenge}
+ %Modify the proof of Theorem~\ref{th:PrinAxes} to prove Theorem \ref{th:Schur}.
%If $A\vec{x}_{1} = \lambda_{1}\vec{x}_{1}$ where $\norm{\vec{x}_{1}} = 1$, let $\{\vec{x}_{1}, \vec{x}_{2}, \dots, \vec{x}_{n}\}$ be an orthonormal basis of $\RR^n$, and let $P_{1} = \begin{bmatrix}
%\vec{x}_{1} & \vec{x}_{2} & \cdots & \vec{x}_{n}
%\end{bmatrix}$. Then $P_{1}$ is orthogonal and $P_{1}^TAP_{1} = \begin{bmatrix}
@@ -686,7 +697,7 @@ \section*{Practice Problems}
%0 & T_{1}
%\end{bmatrix}$
% is upper triangular.
- \end{problem}
+ %\end{problem}
\section*{Text Source} This section was adapted from Section 8.2 of Keith Nicholson's \href{https://open.umn.edu/opentextbooks/textbooks/linear-algebra-with-applications}{\it Linear Algebra with Applications}. (CC-BY-NC-SA)