A matrix represents a linear transformation between vector spaces. Often we want to think about linear maps from an *n*-dimensional vector space to itself. When the vector space we're mapping to and from is the same, the matrix that represents the linear transformation is square, and has special properties. In general, a vector will change under the transformation, so a good question to ask is whether any special vectors stay the same afterwards.

We call these 'eigenvectors' after the German *eigen* ('own', or 'characteristic'), but all that means is that they're vectors that stay the same (up to a possible rescaling) under the transformation. This gives us intuition about the geometry of the transformation. Consider a 2d reflection, for instance: two directions are special here - the axis of reflection (where you put the 'mirror'), which is an eigenvector with eigenvalue 1 because it stays exactly the same under reflection, and the vector perpendicular to the mirror, which is inverted to point in the opposite direction and so has eigenvalue -1.
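A quick numerical sketch of this geometry (using reflection across the line y = x as an illustrative choice of mirror):

```python
import numpy as np

# Reflection across the line y = x: it swaps the two coordinates
# of any vector, so the mirror axis is the direction (1, 1).
R = np.array([[0.0, 1.0],
              [1.0, 0.0]])

eigenvalues, eigenvectors = np.linalg.eig(R)
print(eigenvalues)            # one eigenvalue is +1, the other is -1

# The mirror axis (1, 1) is left exactly as it was: eigenvalue +1.
v_mirror = np.array([1.0, 1.0])
print(R @ v_mirror)           # same vector back

# The perpendicular direction (1, -1) is flipped: eigenvalue -1.
v_perp = np.array([1.0, -1.0])
print(R @ v_perp)             # negated vector
```

`np.linalg.eig` finds both eigenvalues at once, but the two matrix-vector products show directly that the mirror axis is fixed and the perpendicular direction is inverted.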

Some matrix transformations 'lose' information: if an eigenvector has eigenvalue 0, every point along a line in that direction is mapped to a single point! Such a transformation cannot be invertible, because we can't tell how far along the line the original point was. So matrices with a zero eigenvalue are singular (non-invertible), and have determinant 0.
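To see this concretely, here's a sketch using a projection onto the x-axis (an illustrative choice of singular matrix): the direction (0, 1) is an eigenvector with eigenvalue 0, and the lost information is exactly the y-coordinate.

```python
import numpy as np

# Projection onto the x-axis: squashes the y-direction completely,
# so (0, 1) is an eigenvector with eigenvalue 0.
P = np.array([[1.0, 0.0],
              [0.0, 0.0]])

print(np.linalg.det(P))       # 0: the matrix is singular

# Every point on the vertical line x = 2 maps to the same image,
# so the original y-coordinate cannot be recovered.
for y in [0.0, 1.0, 5.0]:
    print(P @ np.array([2.0, y]))   # always (2, 0)
```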

[Extension: We think about this concept, of invariance of a vector under a linear transformation, a lot in mathematics - you've probably learnt that d/dx ( e^{x} ) = e^{x} ; that's a function that stays the same after differentiation. Well, differentiation is a linear transformation on a vector space of functions, and the exponential function e^{x} is an eigen*function* of it with eigenvalue 1!]