Special Matrices: Triangular, Symmetric, Diagonal

We have seen that a matrix is a rectangular block of entries, i.e. two-dimensional data. The size of a matrix is given by its number of rows and number of columns. If these two numbers are equal, we call such a matrix a square matrix.

To square matrices we associate what we call the main diagonal (in short, the diagonal). Indeed, consider the matrix

\begin{displaymath}A= \left(\begin{array}{cc}
a&b\\
c&d\\
\end{array}\right).\end{displaymath}

Its diagonal is given by the numbers a and d. For the matrix

\begin{displaymath}A = \left(\begin{array}{ccc}
a&b&c\\
d&e&f\\
g&h&k\\
\end{array}\right)\end{displaymath}

its diagonal consists of a, e, and k. In general, if A is a square matrix of order n and if $a_{ij}$ is the entry in the ith row and jth column, then the diagonal is given by the numbers $a_{ii}$, for $i=1,\ldots,n$.
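As a quick numerical illustration (a sketch assuming NumPy is available; the matrix entries are made up for the example), the diagonal of a square matrix can be extracted directly:

```python
import numpy as np

# A 3x3 square matrix; its main diagonal is the entries a_ii.
A = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]])

diag = np.diag(A)  # picks out a_11, a_22, a_33
print(diag)  # [1 5 9]
```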

The diagonal of a square matrix helps define two types of matrices: upper-triangular and lower-triangular. Indeed, the diagonal subdivides the matrix into two blocks: one above the diagonal and one below it. If the lower block consists of zeros, we call such a matrix upper-triangular; if the upper block consists of zeros, we call it lower-triangular. For example, the matrices

\begin{displaymath}\left(\begin{array}{cc}
a&b\\
0&d\\
\end{array}\right)\;\mbox{ and }\;\left(\begin{array}{ccc}
a&b&c\\
0&e&f\\
0&0&k\\
\end{array}\right)\end{displaymath}

are upper-triangular, while the matrices

\begin{displaymath}\left(\begin{array}{cc}
a&0\\
c&d\\
\end{array}\right)\;\mbox{ and }\;\left(\begin{array}{ccc}
a&0&0\\
d&e&0\\
g&h&k\\
\end{array}\right)\end{displaymath}

are lower-triangular. Now consider the two matrices

\begin{displaymath}A = \left(\begin{array}{ccc}
a&0&0\\
d&e&0\\
g&h&k\\
\end{array}\right)\;\mbox{ and }\; B = \left(\begin{array}{ccc}
a&d&g\\
0&e&h\\
0&0&k\\
\end{array}\right).\end{displaymath}

The matrices A and B are triangular. But there is something special about these two matrices: as you can see, if you reflect the matrix A about the diagonal, you get the matrix B. This operation is called the transpose operation. In general, let A be an $n \times m$ matrix defined by the numbers $a_{ij}$; then the transpose of A, denoted $A^T$, is the $m \times n$ matrix defined by the numbers $b_{ij}$ where $b_{ij} = a_{ji}$. For example, for the matrix

\begin{displaymath}A = \left(\begin{array}{ccc}
a&b&c\\
d&e&f\\
g&h&k\\
t&r&s\\
\end{array}\right)\end{displaymath}

we have

\begin{displaymath}A^{T} = \left(\begin{array}{cccc}
a&d&g&t\\
b&e&h&r\\
c&f&k&s\\
\end{array}\right).\end{displaymath}
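To see the transpose in action numerically (a sketch assuming NumPy is available; the entries are illustrative), note how a 4×3 matrix becomes a 3×4 matrix with rows and columns swapped:

```python
import numpy as np

# A 4x3 matrix; its transpose is 3x4, with b_ij = a_ji.
A = np.array([[1,  2,  3],
              [4,  5,  6],
              [7,  8,  9],
              [10, 11, 12]])

At = A.T
print(At.shape)        # (3, 4)
print(At[2, 1])        # a_24 reflected: equals A[1, 2] = 6
```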

Properties of the transpose operation. If X and Y are $m \times n$ matrices and Z is an $n \times k$ matrix, then

1.
$(X+Y)^T = X^T + Y^T$
2.
$(XZ)^T = Z^T X^T$
3.
$(X^T)^T = X$
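These three properties can be checked on random matrices (a sketch assuming NumPy is available; the shapes and seed are arbitrary choices for the demonstration):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((2, 3))  # m x n
Y = rng.standard_normal((2, 3))  # m x n
Z = rng.standard_normal((3, 4))  # n x k

# 1. The transpose of a sum is the sum of the transposes.
assert np.allclose((X + Y).T, X.T + Y.T)
# 2. The transpose of a product reverses the order of the factors.
assert np.allclose((X @ Z).T, Z.T @ X.T)
# 3. Transposing twice gives back the original matrix.
assert np.allclose(X.T.T, X)
print("all three properties hold")
```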

A symmetric matrix is a matrix that is equal to its transpose. In particular, a symmetric matrix must be a square matrix. For example, the matrices

\begin{displaymath}\left(\begin{array}{cc}
a&b\\
b&c\\
\end{array}\right)\;\mbox{ and }\;\left(\begin{array}{ccc}
a&b&c\\
b&d&e\\
c&e&f\\
\end{array}\right)\end{displaymath}

are symmetric matrices. In particular, a symmetric matrix of order n contains at most $\displaystyle \frac{n(n+1)}{2}$ different numbers.
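A simple symmetry test follows directly from the definition (a sketch assuming NumPy is available; the helper `is_symmetric` is introduced here for illustration and is not part of the original text):

```python
import numpy as np

# A 3x3 symmetric matrix: it equals its own transpose, and its
# 3*(3+1)/2 = 6 distinct entries determine the whole matrix.
S = np.array([[1, 2, 3],
              [2, 4, 5],
              [3, 5, 6]])

def is_symmetric(M):
    # Must be square and equal to its transpose.
    return M.shape[0] == M.shape[1] and np.array_equal(M, M.T)

print(is_symmetric(S))                              # True
print(is_symmetric(np.array([[1, 2], [3, 4]])))     # False
```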

A diagonal matrix is a symmetric matrix with all of its entries equal to zero except possibly the ones on the diagonal. So a diagonal matrix of order n has at most n different numbers other than 0. For example, the matrices

\begin{displaymath}\left(\begin{array}{cc}
a&0\\
0&b\\
\end{array}\right)\;\mbox{ and }\;\left(\begin{array}{ccc}
a&0&0\\
0&0&0\\
0&0&b\\
\end{array}\right)\end{displaymath}

are diagonal matrices. Identity matrices are examples of diagonal matrices. Diagonal matrices play a crucial role in matrix theory, as we will see later on.
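Diagonal matrices are easy to build from their diagonal entries (a sketch assuming NumPy is available; the particular entries are arbitrary):

```python
import numpy as np

# Build a diagonal matrix from its diagonal entries.
D = np.diag([1.0, 2.0, 3.0])
print(D)

# The identity matrix is the diagonal matrix whose diagonal
# entries are all 1.
print(np.array_equal(np.diag([1, 1, 1]), np.eye(3, dtype=int)))  # True
```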

Example. Consider the diagonal matrix

\begin{displaymath}A = \left(\begin{array}{cc}
a&0\\
0&b\\
\end{array}\right).\end{displaymath}

Define the power-matrices of A by

\begin{displaymath}A^0 = I_2, \; A^1 = A,\; A^2 = A A, \; A^3 = A A A\;\; \mbox{etc..}\end{displaymath}

Find the power matrices of A and then evaluate the matrices

\begin{displaymath}I_2 + \frac{1}{1!}A + \frac{1}{2!}A^2 + \cdots + \frac{1}{n!} A^n\end{displaymath}

for n=1,2,....

Answer. We have

\begin{displaymath}A^2 = \left(\begin{array}{cc}
a&0\\
0&b\\
\end{array}\right)\left(\begin{array}{cc}
a&0\\
0&b\\
\end{array}\right) = \left(\begin{array}{cc}
a^2&0\\
0&b^2\\
\end{array}\right)\end{displaymath}

and

\begin{displaymath}A^3 = A^2 A = \left(\begin{array}{cc}
a^2&0\\
0&b^2\\
\end{array}\right)\left(\begin{array}{cc}
a&0\\
0&b\\
\end{array}\right) = \left(\begin{array}{cc}
a^3&0\\
0&b^3\\
\end{array}\right).\end{displaymath}

By induction, one may easily show that

\begin{displaymath}A^n = \left(\begin{array}{cc}
a^n&0\\
0&b^n\\
\end{array}\right)\end{displaymath}

for every natural number n. Then we have

\begin{displaymath}I_2 + \frac{1}{1!}A + \frac{1}{2!}A^2 + \cdots + \frac{1}{n!}A^n = \left(\begin{array}{cc}
1 + \frac{a}{1!} + \frac{a^2}{2!} + \cdots + \frac{a^n}{n!} & 0\\
0 & 1 + \frac{b}{1!} + \frac{b^2}{2!} + \cdots + \frac{b^n}{n!}\\
\end{array}\right)\end{displaymath}

for n=1,2,...

Scalar Product. Consider the $3 \times 1$ matrices

\begin{displaymath}X= \left(\begin{array}{c}
a\\
b\\
c\\
\end{array}\right)\;\mbox{ and }\; Y = \left(\begin{array}{c}
\alpha \\ \beta \\ \gamma\\
\end{array}\right).\end{displaymath}

The scalar product of X and Y is defined by

\begin{displaymath}X^{T}Y = \left(\begin{array}{ccc}
a&b&c\\
\end{array}\right)\left(\begin{array}{c}
\alpha\\
\beta\\
\gamma\\
\end{array}\right) = a \alpha + b \beta + c \gamma .\end{displaymath}

In particular, we have

$X^{T}X = (a^2 + b^2 + c^2)$. This is a $1 \times 1$ matrix.
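The scalar product can be reproduced with column vectors (a sketch assuming NumPy is available; the entries are arbitrary example values):

```python
import numpy as np

# Scalar product of two 3x1 column vectors X and Y: X^T Y.
X = np.array([[1.0], [2.0], [3.0]])
Y = np.array([[4.0], [5.0], [6.0]])

s = X.T @ Y          # a 1x1 matrix: 1*4 + 2*5 + 3*6
print(s)             # [[32.]]
print(s.shape)       # (1, 1)

# X^T X = a^2 + b^2 + c^2, the squared length of X.
print((X.T @ X)[0, 0])   # 14.0
```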


Author: M.A. Khamsi

Copyright 1999-2017 MathMedics, LLC. All rights reserved.
Math Medics, LLC. - P.O. Box 12395 - El Paso TX 79913 - USA