What we have done so far
 First week:
 Geometry of linear equations.
 Length (norm, magnitude) of a vector with n components. Dot product of two vectors.
 Elimination method. Cost of this algorithm.
 Singular and nonsingular systems.
 Matrix multiplication.
 Use of blocks in multiplication.
 Second week:
 Elementary matrices.
 LU decomposition (with no changes of rows).
 Use of LU in solving linear equations.
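The LU topics above can be sketched in code. A minimal Python sketch (helper names like `lu_no_exchanges` are illustrative, not from any library), assuming no zero pivot appears, so no row exchanges are needed:

```python
def lu_no_exchanges(A):
    """Return (L, U) with A = L*U; L has unit diagonal (Doolittle form)."""
    n = len(A)
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    U = [row[:] for row in A]  # work on a copy of A
    for k in range(n):
        for i in range(k + 1, n):
            m = U[i][k] / U[k][k]      # multiplier that eliminates U[i][k]
            L[i][k] = m                # record it in L
            for j in range(k, n):
                U[i][j] -= m * U[k][j]
    return L, U

def solve_lu(L, U, b):
    """Solve Ax = b via Ly = b (forward) then Ux = y (backward)."""
    n = len(b)
    y = [0.0] * n
    for i in range(n):
        y[i] = b[i] - sum(L[i][j] * y[j] for j in range(i))
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (y[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
    return x

A = [[2.0, 1.0], [4.0, 3.0]]
L, U = lu_no_exchanges(A)
x = solve_lu(L, U, [3.0, 7.0])   # solves 2x+y=3, 4x+3y=7
```

The two triangular solves are each O(n^2), which is why one factorization A=LU pays off when solving Ax=b for many right-hand sides b.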
 Permutation matrices.
 PA=LU (with change of rows).
 Sign of a permutation.
 Uniqueness of LDU decomposition.
 Multiplying by a diagonal matrix from left (or right).
 Transpose of a matrix.
 Symmetric matrix.
 (AB)^{T}=B^{T}A^{T}.
 Third week:
 If A=LDU is symmetric, then L=U^{T}.
 Inverse of a matrix.
 Right inverse=left inverse (if both of them exist).
 Square matrix + left inverse ⇒ invertible.
 P^{-1}=P^{T} if P is a permutation matrix.
 A^{-1}=A^{T} if A is an orthogonal matrix.
 E_{ij}(c)^{-1}= E_{ij}(-c).
 (AB)^{-1}=B^{-1}A^{-1} if both A and B are invertible.
 If AB and B are invertible, so is A.
 How to find inverse of a matrix. (only in one of the classes)
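The inverse-finding recipe (row-reduce the augmented matrix [A | I] until the left half becomes I; the right half is then A^{-1}) can be sketched as follows; a minimal Python version (the name `invert` is illustrative), assuming A is invertible:

```python
def invert(A):
    n = len(A)
    # build the augmented matrix [A | I]
    M = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(A)]
    for k in range(n):
        # partial pivoting: swap up the row with the largest entry in column k
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        piv = M[k][k]
        M[k] = [v / piv for v in M[k]]           # scale pivot row: pivot becomes 1
        for i in range(n):                       # clear column k in every other row
            if i != k:
                f = M[i][k]
                M[i] = [a - f * b for a, b in zip(M[i], M[k])]
    return [row[n:] for row in M]                # right half is A^{-1}

Ainv = invert([[2.0, 1.0], [1.0, 1.0]])          # inverse of [[2,1],[1,1]]
```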
 If A is invertible, Ax=0 (column vectors) and yA=0 (row vectors) have unique vector solutions, x=0 and y=0, respectively.
 Necessary and sufficient condition for an upper-triangular matrix to be invertible. (Diagonal entries should be nonzero.)
 A is invertible if and only if A^{T} is invertible. Moreover, if they are invertible, (A^{T})^{-1}=(A^{-1})^{T}.
 PA=LU; A is invertible if and only if all the diagonal entries of U are nonzero.
 If A is not invertible, then there is a nonzero vector x such that Ax=0.
 Definition of a vector space.
 Examples of vector spaces from "linear" differential equations, e.g.
{f:[0,1] → R | f is differentiable, f''(x)-f'(x)-f(x)=0}.
 Examples of vector spaces from "linear" recursive sequences, e.g.
{{a_{n}}_{n=1}^{∞} | a_{n}'s real numbers & a_{n+2}=a_{n+1}+a_{n} for any positive integer n}.
 Linear transformation T_{A}(x)=Ax associated with a given matrix A.
 Image of T_{A}=the column space of A=the set of linear combinations of columns of A.
 Null space (or kernel) of A.
 Echelon form of A.
 Using the elimination process to describe the column space and the null space of A.
 How to describe "preimage of a vector b under T_{A}", i.e. solutions of Ax=b.
 Fourth week:
 Reduced row echelon form of a matrix.
 How to describe solutions of Ax=b, using rref(A).
 Ax=b has either 0, 1, or infinitely many solutions (revisited).
 If x_{p} is a solution of Ax=b, then any solution of this system of linear equations is of the form x_{p}+x_{n}, where x_{n} is a vector in Null(A), the null space of A.
 If A is n by m and m>n, then Null(A) is nonzero, i.e. if the number of variables is more than the number of equations in Ax=0, then there is a nonzero solution.
 Let A be a square matrix; then A is invertible if and only if Null(A)={0}.
 Rank(A)= number of leading 1's in the rref(A).
 Linearly independent vectors.
 v_{1}, v_{2}, ... , v_{n} are linearly independent if and only if the null space of A=[v_{1} ... v_{n}] is zero.
 Columns of an invertible matrix are linearly independent.
 Rows of an invertible matrix are linearly independent.
 Spanning sets.
 Columns of a matrix span the column space.
 Define a basis.
 dim (R^{n})=n.
 How to find a basis of the null space and the column space.
 Null(A)=Null(rref(A)).
 dim (column space of A)=rank(A).
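The bookkeeping above (rank, a basis of the column space from the pivot columns of A, a basis of the null space from the special solutions read off rref(A)) can be sketched in Python; `rref` is an illustrative helper, not a library call:

```python
def rref(A):
    """Return (R, pivot_columns) for a matrix given as a list of rows."""
    R = [row[:] for row in A]
    rows, cols = len(R), len(R[0])
    pivots, r = [], 0
    for c in range(cols):
        p = next((i for i in range(r, rows) if abs(R[i][c]) > 1e-12), None)
        if p is None:
            continue                    # no pivot in this column (free variable)
        R[r], R[p] = R[p], R[r]
        R[r] = [v / R[r][c] for v in R[r]]
        for i in range(rows):
            if i != r:
                f = R[i][c]
                R[i] = [a - f * b for a, b in zip(R[i], R[r])]
        pivots.append(c)
        r += 1
    return R, pivots

A = [[1.0, 2.0, 3.0],
     [2.0, 4.0, 6.0]]
R, pivots = rref(A)
rank = len(pivots)                                    # = dim C(A)
col_basis = [[row[c] for row in A] for c in pivots]   # pivot columns of A

# one special solution per free column: a basis of Null(A) = Null(rref(A))
null_basis = []
free = [c for c in range(len(A[0])) if c not in pivots]
for c in free:
    v = [0.0] * len(A[0])
    v[c] = 1.0
    for r_i, pc in enumerate(pivots):
        v[pc] = -R[r_i][c]             # read the special solution off rref(A)
    null_basis.append(v)
```

Note that rank + len(null_basis) = number of columns, matching dim(Null(A)) = #columns - rank(A).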
 Fifth week:
 row space of A=row space of rref(A).
 dim (row space of A)= rank(A).
 rank(A)=rank(A^{T}).
 dim (left null space of A)= #rows - rank(A).
 dim (Null space of A)= #columns - rank(A).
 Let T:R^{n} → R^{m} be a linear transformation; then
dim(R^{n})=dim(ker(T))+dim(Im(T)).
 If A has a right inverse, then the left null space of A is zero.
 If A has a left inverse, then the null space of A is zero.
 Left null space of A is zero if and only if rank(A)= #rows.
 Null space of A is zero if and only if rank(A)= #columns.
 If rank(A)= #rows, then A has a rightinverse.
 The following statements are equivalent:
 A has a left inverse.
 N(A)={0}.
 rank(A)=# columns.
 If Ax=b has a solution, it is unique.
 The row space = R^{# columns}.
 For any b, x^{T}A=b^{T} has a solution.
 A^{T} has a right inverse.
 Any finite dimensional (real) vector space V can be identified with R^{dim V}.
 Let B={v_{1},...,v_{n}} be a basis of V. For any v in V, let
[v]_{B}=[c_{1} ... c_{n}]^{T},
where v=c_{1} v_{1}+...+c_{n}v_{n}. Then [.]_{B}:V → R^{n} is a one-to-one and surjective linear map.
 Let B_{1}={v_{1},...,v_{n}} and B_{2}={w_{1},...,w_{n}} be two bases of V. Then the following diagram (figure omitted) is commutative:
where S=[[w_{1}]_{B1}...[w_{n}]_{B1}], i.e. [v]_{B1}=S [v]_{B2}.
 Let T be a linear transformation from V to W, B={v_{1},...,v_{n}} a basis of V, and B'={w_{1},...,w_{m}} a basis of W. Then the following diagram (figure omitted) is commutative:
where A=[[T(v_{1})]_{B'}...[T(v_{n})]_{B'}], i.e. [T(v)]_{B'}=A [v]_{B}.
 Sixth week:
 The following diagram (figure omitted) is commutative:
where S, S', A_{1}, and A_{2} are as above. It is the same as saying that
A_{2}=S'^{-1} A_{1} S.
 Let T be a linear transformation from V to V. Let B_{1} and B_{2} be two bases of V. In the above setting, let W=V, B'_{1}=B_{1}, and B'_{2}=B_{2}; then
S'=S and A_{2}=S^{-1} A_{1} S.
 We say that A is similar to B if A=S^{-1} B S, for some invertible S.
 Description of matrix of rotation in the standard basis.
 Description of matrix of projection onto a line and more generally onto a direction in R^{n} in the standard basis.
Proj_{a}(v)=(a.v/a.a)a=(1/a^{T}a) a (a^{T}v)=(1/a^{T}a) (a a^{T}) v.
So in the standard basis, its matrix is (1/a^{T}a) (a a^{T}), which is a symmetric matrix.
 Remark: since we used the dot product for the projection, we are writing v in the standard basis (or orthonormal basis which we will learn later).
 Description of matrix of reflection about a line which passes through the origin.
Ref_{L}(v)+v=2 Proj_{L}(v),
therefore
Ref_{L}(v)=2 Proj_{L}(v)-v,
and we get what we wanted.
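The projection and reflection formulas above can be sketched directly; a small Python version (helper names `proj` and `reflect` are illustrative):

```python
# Proj_a(v) = (a.v / a.a) a, and Ref_L(v) = 2 Proj_L(v) - v
# for the line L spanned by a.

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def proj(a, v):
    c = dot(a, v) / dot(a, a)            # scalar coefficient a.v / a.a
    return [c * ai for ai in a]

def reflect(a, v):
    p = proj(a, v)
    return [2 * pi - vi for pi, vi in zip(p, v)]

p = proj([1.0, 0.0], [3.0, 4.0])         # project onto the x-axis
r = reflect([1.0, 0.0], [3.0, 4.0])      # reflect about the x-axis
```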
 Changing the standard basis and showing that the projection matrix (onto a line) is similar to the diagonal matrix diag(1,0,...,0),
and the reflection matrix (about a line) is similar to diag(1,-1,...,-1).
 Orthogonal complement of a subspace.
 Let V be a subspace of R^{n}; (V^{ ⊥})^{ ⊥}=V.
 Let V be a subspace of R^{n}; then R^{n}=V ⊕ V^{ ⊥}, i.e. any vector b in R^{n} can be written as a sum of a vector v in V and v^{ ⊥ } in V^{ ⊥ }, in a unique way.
 If b=v+v^{ ⊥ }, where v is in V and v^{ ⊥ } is in V^{ ⊥ }, then Proj_{V}(b)=v and Proj_{V ⊥ }(b)=v^{ ⊥ }.
 Proj_{V}+Proj_{V ⊥ }=id_{Rn}.
 Let V be a subspace of R^{n}, x and y two vectors in R^{n}; then x=Proj_{V}(y) if and only if
 x is in V,
 y-x is in V^{ ⊥}.
 C(A)^{⊥}=N(A^{T}). (Similarly, C(A^{T})=N(A)^{⊥}.)
 Let A be an n by m real matrix, T_{A} the associated linear transformation from R^{m} to R^{n}; then restriction of T_{A} is an isomorphism between the row space C(A^{T}) and the column space C(A), i.e. it is a linear map whose kernel is zero and whose range (image) is the codomain.
 Getting the best possible answer to an inconsistent system of linear equations Ax=b.
 Normal equation associated with a least-squares problem: A^{T}A x=A^{T}b.
 N(A^{T}A)=N(A).
 C(A^{T}A)=C(A^{T}).
 If columns of A are linearly independent, then
 ||Ax-b|| is minimized at
x=(A^{T}A)^{-1}A^{T} b.
 Matrix P of Proj_{C(A)} the orthogonal projection onto the column space of A in the standard basis is equal to
A(A^{T}A)^{-1}A^{T}.
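The least-squares recipe above can be sketched for the simplest case, fitting y ≈ c + d t to data points by solving the normal equation A^{T}A x = A^{T}b; the 2x2 system is solved by Cramer's rule (the function name `lstsq_line` is illustrative):

```python
def lstsq_line(ts, ys):
    A = [[1.0, t] for t in ts]                  # columns: all-ones, t
    # form A^T A (2x2) and A^T b (2-vector)
    AtA = [[sum(r[i] * r[j] for r in A) for j in range(2)] for i in range(2)]
    Atb = [sum(r[i] * y for r, y in zip(A, ys)) for i in range(2)]
    # solve the 2x2 normal equation by Cramer's rule
    det = AtA[0][0] * AtA[1][1] - AtA[0][1] * AtA[1][0]
    c = (Atb[0] * AtA[1][1] - AtA[0][1] * Atb[1]) / det
    d = (AtA[0][0] * Atb[1] - Atb[0] * AtA[1][0]) / det
    return c, d

c, d = lstsq_line([0.0, 1.0, 2.0], [0.0, 1.0, 2.0])   # points exactly on y=t
```

The determinant is nonzero exactly when the columns of A are linearly independent, i.e. when the t_{i} are not all equal.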
 Seventh week:
 Linear least square method.
 The same method works for other kinds of expected functions, i.e. as long as we expect that our output is a linear combination of a finite set of given functions. For example, if we expect to get a polynomial of degree n, then A=[a_{ij}] such that a_{ij}=t_{i}^{j-1}.
 If P^{2}=P and P^{T}=P, then it represents an orthogonal projection onto C(P).
 If P is projection onto a subspace V, then C(P)=V.
 We say that a map T:R^{n}→ R^{m} preserves the Euclidean structure if
 ||T(x)-T(y)||=||x-y|| for any x and y.
 Angle between the segments T(x)T(y) and T(y)T(z)= angle between the segments xy and yz.
 If T:R^{n}→ R^{m} is a linear map which preserves the Euclidean structure and Q is the matrix associated with T in the standard basis, then
 Columns of Q have length 1.
 Columns of Q are pairwise orthogonal to each other.
 If Q is an m by n matrix such that
 Columns of Q have length 1 and
 Columns of Q are pairwise orthogonal to each other,
then
 Q^{T}Q=I_{n},
 (Qx).(Qy)=x.y,
 ||Qx||=||x||,
 ∠(Qx,Qy)=∠(x,y).
 Let T:R^{n}→R^{m} be a linear transformation. The following statements are equivalent:
 ||T(x)||=||x|| for any x.
 T(x).T(y)=x.y for any x and y.
 ||T(x)||=||x|| and ∠(T(x),T(y))=∠(x,y).
 Definition of orthonormal vectors, orthonormal basis, and orthogonal matrices.
 Q^{T}Q=I_{n} if and only if the columns of Q are orthonormal.
 If Q is a square matrix and its columns are orthonormal, then its rows are also orthonormal.
 If q_{i}'s are nonzero and pairwise orthogonal to each other, and
b=c_{1}q_{1}+...+c_{n}q_{n},
then
c_{i}=(q_{i}.b)/(q_{i}.q_{i}).
 If a_{i}'s are linearly independent, how to find the projection of a given vector onto the span of a_{i}'s. (Let A be a matrix whose columns are a_{i}'s and then use A(A^{T}A)^{-1}A^{T}.)
 We can also find an orthonormal basis for the span of a_{i}'s and then use the dot product.
 If the columns of Q are orthonormal, then the matrix of projection onto the C(Q) in the standard basis is QQ^{T}.
 Gram-Schmidt process, i.e. let a_{i} be linearly independent vectors,
b_{j}:=a_{j}-Σ_{i=1}^{j-1} (q_{i}.a_{j})q_{i},
 q_{j}=b_{j}/||b_{j}||;
then q_{i}'s are an orthonormal basis of the span of a_{i}'s.
 How to use Gram-Schmidt to find an orthonormal basis of the span of given vectors.
 If the columns of A are linearly independent, then A=QR, where Q has orthonormal columns and R is an upper-triangular matrix. (A is not necessarily a square matrix.)
 If columns of A are a_{i}'s, and q_{i}'s are the outcomes of the Gram-Schmidt process, then for i at most j, the ij entry of R is q_{i}.a_{j}.
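The Gram-Schmidt computation of Q and R can be sketched as follows; a minimal Python version (the name `gram_schmidt_qr` is illustrative) that keeps columns as plain lists and records R[i][j]=q_{i}.a_{j} for i at most j:

```python
import math

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def gram_schmidt_qr(cols):
    """cols: list of linearly independent columns; returns (Q_cols, R)."""
    n = len(cols)
    q_cols, R = [], [[0.0] * n for _ in range(n)]
    for j, a in enumerate(cols):
        b = a[:]
        for i, q in enumerate(q_cols):
            R[i][j] = dot(q, a)                  # coefficient q_i . a_j
            b = [bi - R[i][j] * qi for bi, qi in zip(b, q)]
        norm = math.sqrt(dot(b, b))              # ||b_j||
        R[j][j] = norm
        q_cols.append([bi / norm for bi in b])   # q_j = b_j / ||b_j||
    return q_cols, R

Q, R = gram_schmidt_qr([[3.0, 4.0], [1.0, 0.0]])
```

Here Q holds orthonormal columns and R is upper-triangular, so multiplying the columns of Q by the entries of R rebuilds the original columns: A=QR.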
 Eighth week:
 What a ring is.
 det:M_{n}(R) → R s.t.
 det(I_{n})=1.
 Exchange of rows changes the sign.
 Linear w.r.t. the first row.
 Linear w.r.t. any row.
 Equal rows ⇒ det=0.
 (Restricted) row operations do not change the determinant.
 Zero row ⇒ det=0.
 A triangular ⇒ det(A)=product of its diagonal entries.
 A is singular iff det(A)=0.
 PA=LU ⇒ det(A)=sgn(P) times the product of diagonal entries of U.
 Product rule: det(AB)=det(A)det(B).
 det(A)=det(B) if A and B are similar.
 det(T) is welldefined when T:V→ V is a linear transformation.
 Transpose rule: det(A)=det(A^{T}).
 "Big formula":
det(A)=Σ_{s in S_{n}} sgn(s) Π_{i=1}^{n} a_{i,s(i)}.
 det of a block lower-triangular matrix: det([A 0; B C])=det(A) det(C), where A and C are square blocks.
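The "big formula" can be sketched literally, summing over all permutations; a minimal Python version (the names `sgn` and `det_big_formula` are illustrative), with the sign computed by counting inversions:

```python
from itertools import permutations

def sgn(perm):
    """Sign of a permutation: -1 to the number of inversions."""
    inv = sum(1 for i in range(len(perm)) for j in range(i + 1, len(perm))
              if perm[i] > perm[j])
    return -1 if inv % 2 else 1

def det_big_formula(A):
    n = len(A)
    total = 0.0
    for s in permutations(range(n)):
        term = sgn(s)
        for i in range(n):
            term *= A[i][s[i]]                  # a_{i, s(i)}
        total += term
    return total

d = det_big_formula([[1.0, 2.0], [3.0, 4.0]])   # 1*4 - 2*3 = -2
```

With n! terms this is only practical for tiny n; it is the definition, not an algorithm — elimination (PA=LU) computes the same number in O(n^3).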
 A_{ij}: (n-1) by (n-1) matrix after deleting the i th row and the j th column of A.
 For any i between 1 and n,
det(A)=Σ_{j=1}^{n} (-1)^{i+j} det(A_{ij}) a_{ij}.
 For any i and k between 1 and n, if k is not equal to i,
0=Σ_{j=1}^{n} (-1)^{i+j} det(A_{ij}) a_{kj}.
 The ij entry of the adjoint of A is (-1)^{j+i} det(A_{ji}).
 A adj(A)=det(A) I_{n}.
 Using transpose: adj(A) A=det(A) I_{n}.
 If A is invertible, A^{-1}=adj(A)/det(A).
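The cofactor and adjugate formulas above can be sketched together; a minimal Python version (names `minor`, `det`, `inverse_by_adjugate` are illustrative) with det computed by expansion along the first row:

```python
def minor(A, i, j):
    """A_{ij}: delete row i and column j."""
    return [row[:j] + row[j+1:] for k, row in enumerate(A) if k != i]

def det(A):
    if len(A) == 1:
        return A[0][0]
    # cofactor expansion along row 0
    return sum((-1) ** j * A[0][j] * det(minor(A, 0, j))
               for j in range(len(A)))

def inverse_by_adjugate(A):
    n, d = len(A), det(A)
    # the ij entry of adj(A) is the cofactor (-1)^{j+i} det(A_{ji})
    return [[(-1) ** (i + j) * det(minor(A, j, i)) / d for j in range(n)]
            for i in range(n)]

Ainv = inverse_by_adjugate([[1.0, 2.0], [3.0, 4.0]])   # det = -2
```

Like the big formula, this is exponential-time and meant to illustrate A adj(A)=det(A) I_{n}; Gauss-Jordan is how one actually inverts.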
