Diagonalizing a matrix NOT having full rank, what does it mean?

This is going to be a quick intuition about what it means to diagonalize a matrix that does not have full rank (i.e. whose determinant is zero).

Every matrix can be seen as a linear map between vector spaces. Stating that a matrix is similar to a diagonal matrix is equivalent to stating that there exists a basis of the source vector space in which the linear transformation acts as a simple stretching of the space, a re-scaling along the basis directions. In other words, diagonalizing a matrix is the same as finding an orthogonal grid that is transformed into another orthogonal grid. I recommend this article from AMS for good visual representations of the topic.

Image taken from AMS – We Recommend a Singular Value Decomposition
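To make the stretching picture concrete, here is a small NumPy sketch of my own (not code from the AMS article): a symmetric matrix, its orthonormal eigenbasis, and a check that in that basis the map only re-scales.

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 2.0]])  # symmetric, hence orthogonally diagonalizable

    eigvals, P = np.linalg.eigh(A)  # columns of P: an orthonormal eigenbasis
    D = np.diag(eigvals)

    # In the eigenbasis the map is diagonal: A = P D P^T
    assert np.allclose(A, P @ D @ P.T)

    # Each eigenvector is only stretched, never rotated:
    for lam, v in zip(eigvals, P.T):
        assert np.allclose(A @ v, lam * v)
    print(eigvals)  # the stretching factors: [1. 3.]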

Diagonalization of non-full-rank matrices

That’s all right: when we have a matrix from \mathbb{R}^3 to \mathbb{R}^3, if it can be diagonalized, we can find a basis in which the transformation is a re-scaling of the space. Fine.

But what does it mean to diagonalize a matrix that has zero determinant? The associated transformation kills at least one dimension: indeed, an n×n matrix of rank k lowers the output dimension by n-k. For example, a 3×3 matrix of rank 2 will have an image of dimension 2 instead of 3. This happens because the images of the basis vectors are linearly dependent: in the example below, two basis vectors are mapped to the same output vector, so one dimension is bound to collapse.

Let’s consider the sample matrix

    \[A = \begin{bmatrix} 0 & 1 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 0 \end{bmatrix}\]

which does not have full rank because it has two equal rows. Indeed, one can check that the two basis vectors (1,0,0) and (0,0,1) are mapped to the same vector (0,1,0). This means that dim(Im(f_A)) = 2 instead of 3. In fact, it is common intuition that when the rank is not full, some dimensions are lost in the transformation. Even though it’s a 3×3 matrix, the image only has 2 dimensions. It’s like at the end of Interstellar, when the 4D tesseract in which Cooper is floating gets shut.
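A quick numerical check with NumPy (my addition, not part of the original post) confirms both the rank and the merging of the two basis vectors:

    import numpy as np

    A = np.array([[0.0, 1.0, 0.0],
                  [1.0, 0.0, 1.0],
                  [0.0, 1.0, 0.0]])

    print(np.linalg.matrix_rank(A))  # 2: the image has dimension 2, not 3
    print(np.linalg.det(A))          # 0.0 (up to floating point)

    # e1 and e3 are merged into the same output vector (0, 1, 0):
    e1 = np.array([1.0, 0.0, 0.0])
    e3 = np.array([0.0, 0.0, 1.0])
    print(A @ e1, A @ e3)  # both print [0. 1. 0.]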

However, A is also a symmetric matrix, so by the spectral theorem we know that it can be diagonalized. And now to the vital questions: what do we expect? What meaning does it have? Do we expect a basis of three vectors even if the map destroys one dimension?

Pause and ponder.
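For those who want to peek numerically while pondering, here is a short NumPy sketch of my own (not the post’s code); it shows that a basis of three eigenvectors does exist, with one stretching factor equal to 0:

    import numpy as np

    A = np.array([[0.0, 1.0, 0.0],
                  [1.0, 0.0, 1.0],
                  [0.0, 1.0, 0.0]])

    eigvals, P = np.linalg.eigh(A)  # A is symmetric: orthonormal eigenbasis
    print(eigvals)                  # approximately [-1.414  0.  1.414]

    # Three eigenvectors exist, but the direction with eigenvalue 0 is
    # re-scaled by 0, i.e. flattened away by the map.
    D = np.diag(eigvals)
    assert np.allclose(A, P @ D @ P.T)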

Continue reading “Diagonalizing a matrix NOT having full rank, what does it mean?”

Overdetermined and underdetermined systems of equations put simply

This article aims to provide real-world examples and links for overdetermined and underdetermined systems of equations. Before starting, we will suppose that all overdetermined and underdetermined systems are obtained from square systems which admit one and only one solution (i.e. systems whose coefficient matrix has non-zero determinant).

Overdetermined systems

When a system of linear equations has more equations than unknowns, we say it is overdetermined. It means what it says: too many rules are being imposed at once, and some of them may be conflicting. Still, it is false to say that an overdetermined system never has a solution: it may or it may not.
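To see that "it may or it may not" concretely, here is a small NumPy sketch (the systems are hypothetical examples of mine, not from the article): the same three-equations-two-unknowns matrix paired with a consistent and with an inconsistent right-hand side.

    import numpy as np

    # Three equations, two unknowns: x + y = 3, x - y = 1, 2x = b3.
    A = np.array([[1.0, 1.0],
                  [1.0, -1.0],
                  [2.0, 0.0]])  # third row = sum of the first two

    # Scenario 1: the extra equation is redundant, so a solution exists.
    b = np.array([3.0, 1.0, 4.0])
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    print(x, np.allclose(A @ x, b))  # [2. 1.] True: consistent

    # Scenario 2: the extra equation conflicts; there is no exact
    # solution, only a least-squares compromise.
    b = np.array([3.0, 1.0, 10.0])
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    print(np.allclose(A @ x, b))  # False: inconsistent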

Intuition

Intuitively, we can think of a system of equations as a set of requests. Imagine you have a group of n people in front of you (the unknowns), and you are supposed to give each person something specific to do. If you give more commands than there are people, you have an overdetermined system. It is clear that when this happens, at least one person must have received more than one command. However, there are two possible scenarios.

Continue reading “Overdetermined and underdetermined systems of equations put simply”

Quick method to find line of shortest distance for skew lines

In linear algebra we sometimes need to find the equation of the line of shortest distance between two skew lines. What follows is a very quick method of finding that line.

Let’s consider an example. Start with two simple skew lines:

    \[s : \begin{cases} x = 1 + t \\ y = 0 \\ z = -t \end{cases}\]

    \[r : \begin{cases} x = -k \\ y = k + 2 \\ z = k \end{cases}\]

(Observation: don’t make the mistake of using the same parameter for both lines. Each line exists on its own; there’s no link between them, so there’s no reason why they should be described by the same parameter. If this doesn’t seem convincing, take two lines you know to be intersecting, use the same parameter for both, and try to find the intersection point.)
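As a concrete check of this warning, here is a NumPy sketch of my own (not part of the post): setting s(t) = r(k) with two separate parameters gives an overdetermined system in (t, k), and that system turns out to be inconsistent, confirming that the lines don’t meet.

    import numpy as np

    # Componentwise, s(t) = r(k) reads:
    #   1 + t = -k      ->  t + k = -1
    #   0     = k + 2   ->      k = -2
    #   -t    = k       ->  t + k =  0
    M = np.array([[1.0, 1.0],
                  [0.0, 1.0],
                  [1.0, 1.0]])
    c = np.array([-1.0, -2.0, 0.0])

    sol, *_ = np.linalg.lstsq(M, c, rcond=None)
    print(np.allclose(M @ sol, c))  # False: no intersection point exists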

The direction vectors are:

    \[V_{s} = (1, 0, -1), \quad V_{r} = (-1, 1, 1)\]

So they clearly aren’t parallel. Nor are they incident: since y = 0 everywhere on s, an intersection could only occur at the point of r with y = 0, which is (2, 0, -2), and that point doesn’t belong to s. It does indeed make sense, then, to look for the line of shortest distance between the two, confident that we will find a non-zero distance.
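As a cross-check (my addition; the post continues with its own construction of the line), the distance itself can be computed directly with the standard cross-product formula d = |(P_r - P_s) · (V_s × V_r)| / |V_s × V_r|:

    import numpy as np

    P_s, V_s = np.array([1.0, 0.0, 0.0]), np.array([1.0, 0.0, -1.0])
    P_r, V_r = np.array([0.0, 2.0, 0.0]), np.array([-1.0, 1.0, 1.0])

    n = np.cross(V_s, V_r)  # direction of the common perpendicular
    d = abs(np.dot(P_r - P_s, n)) / np.linalg.norm(n)
    print(n, d)  # [1. 0. 1.] 0.7071...: a non-zero distance, as expected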

Continue reading “Quick method to find line of shortest distance for skew lines”