
i finally understand diagonalisation lol

may 5th, 2023

    i know it may sound surprising, but i'd never really understood diagonalisation until now. it seemed more like a trick than something i could really get myself, like a lot of things in linear algebra. however, a couple of realisations helped me understand this whole process. first off, i had to figure out, more-or-less on my own, that matrices are just a bunch of column vectors written side by side, like (a1 ⋯ an), that the dot product ⟨·,·⟩ helps define linear maps, and that matrix multiplication · helps fetch vectors among matrices.

    for vectors, ⟨a,x⟩ yields the a-coordinate of x (at least when a belongs to an orthonormal family), meaning any way to write x with respect to a has the form x = ⋯ + ⟨a,x⟩ a + ⋯. an interesting corollary is that φa : x ↦ ⟨a,x⟩ is a linear ℝn→ℝ map ! and the coefficient of each xi in it is ai, so a helps encode the coefficients of the linear map. for example, 5x1 + 7x2 − 3x3 is just ⟨(5, 7, −3), x⟩.
    for matrices, φA : x ↦ ⟨A,x⟩ = (⟨A1,x⟩, ⋯, ⟨Am,x⟩) helps define linear ℝn→ℝm maps in a very similar way, with A = (A1 ⋯ Am) made of m vectors of ℝn. we could also technically define ⟨A,B⟩ = (⟨A,B1⟩ ⋯ ⟨A,Bk⟩) to display k vectors in im(φA).
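
    just to make this concrete, here's a rough numpy sketch of the two maps above, ⟨a,·⟩ as a linear form ℝ3→ℝ and a similar ℝ3→ℝ2 map built from two such vectors (numpy and the specific numbers are my own picks here, purely for illustration) :

        import numpy as np

        # the linear form 5x1 + 7x2 - 3x3 is entirely encoded by the vector a = (5, 7, -3)
        a = np.array([5.0, 7.0, -3.0])
        x = np.array([1.0, 2.0, 4.0])
        print(np.dot(a, x))                                # ⟨a,x⟩ = 5*1 + 7*2 - 3*4 = 7.0

        # a linear ℝ3→ℝ2 map is encoded by two such vectors : φ_A(x) = (⟨A1,x⟩, ⟨A2,x⟩)
        A1 = np.array([5.0, 7.0, -3.0])
        A2 = np.array([0.0, 1.0,  2.0])
        print(np.array([np.dot(A1, x), np.dot(A2, x)]))    # [ 7. 10.]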

    when i say matrix multiplication helps fetch vectors among matrices, it's because A·ei, where ei = (δi1, ⋯, δik), is defined as Ai, the ith column vector in the k×m matrix A, and then we kinda force in distributivity over addition, like A·x = x1(A·e1) + ⋯ + xk(A·ek), as well as, akin to the scalar case, A·B = (A·B1 ⋯ A·Bk). the observation that kinda starts motivating diagonalisation and the study of dual spaces is that ⟨A,B⟩ = Aᵀ·B, where Aᵀ is the transpose of A. the implication is that we could study linear forms by fetching a bunch of vectors, akin to interpolation in the case of polynomials.
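
    here's a small numpy sketch of that fetching behaviour (the matrices are arbitrary ones i made up, and numpy's A.T plays the role of the transpose in ⟨A,B⟩ = Aᵀ·B) :

        import numpy as np

        A = np.array([[1.0, 2.0],
                      [3.0, 4.0],
                      [5.0, 6.0]])     # two column vectors of ℝ3, A1 and A2

        e1 = np.array([1.0, 0.0])
        print(A @ e1)                  # [1. 3. 5.], i.e. A·e1 fetches the 1st column A1

        x = np.array([2.0, -1.0])
        print(A @ x)                   # distributivity : 2(A·e1) - 1(A·e2) = [0. 2. 4.]

        B = np.array([[1.0, 0.0],
                      [0.0, 1.0],
                      [1.0, 1.0]])
        print(A.T @ B)                 # entry (i, j) is ⟨Ai, Bj⟩, i.e. ⟨A,B⟩ = Aᵀ·B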

    just like polynomials, we'd want to have matrices written in a way which explicates their points of interest. for example, we can write dth degree polynomials as P : x ↦ λ (x − z1) ⋯ (x − zd), with the zi the solutions to P(zi) = 0 and λ the leading coefficient. the square-matrix-wise equivalents to these roots are eigenvectors, which are nonzero vectors x such that A·x is colinear to x (there are usually infinitely many, so we pick n linearly independent eigenvectors v1, ⋯, vn for an n×n matrix A whenever we can), leading coefficients become the associated eigenvalues λ1, ⋯, λn (by how much each eigenvector is stretched, in the same order), and the equivalent writing form is given by... diagonalisation !
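
    numerically, this is easy to see with numpy's eigensolver (the matrix below is just an arbitrary diagonalisable example i picked) :

        import numpy as np

        A = np.array([[2.0, 1.0],
                      [1.0, 2.0]])
        eigenvalues, eigenvectors = np.linalg.eig(A)   # the columns of `eigenvectors` are the vi

        for i in range(2):
            v = eigenvectors[:, i]
            lam = eigenvalues[i]
            print(np.allclose(A @ v, lam * v))         # True : A·vi really is λi·vi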

    indeed, once we get the eigenvectors and their associated eigenvalues, A·(v1 ⋯ vn) is, by definition, (λ1v1 ⋯ λnvn). however, we also have from the defining distributivity property that (v1 ⋯ vn)·λiei = λivi, so combining the two main defining properties together, we get (v1 ⋯ vn)·(λ1e1 ⋯ λnen) = (λ1v1 ⋯ λnvn)... sooo, the same as A·(v1 ⋯ vn). we've just shown that A·(v1 ⋯ vn) = (v1 ⋯ vn)·(λ1e1 ⋯ λnen), which implies that A = (v1 ⋯ vn)·(λ1e1 ⋯ λnen)·(v1 ⋯ vn)⁻¹ whenever (v1 ⋯ vn) is invertible.

    obviously, many cool things arise from this, notably because diagonal matrices are really easy to handle overall : their powers are the componentwise powers of the diagonal, their determinant is the product of the diagonal entries, their inverses are the componentwise inverses, etc. it's just as helpful as, well, writing a polynomial as P : x ↦ λ (x − z1) ⋯ (x − zd), and it explicates the values of interest in the same way (roots and leading coefficients for polynomials, eigenvectors and eigenvalues for matrices).
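
    and here's a quick numpy check of that factorisation, reusing the same arbitrary symmetric matrix as above (symmetric so that it's guaranteed to be diagonalisable) :

        import numpy as np

        A = np.array([[2.0, 1.0],
                      [1.0, 2.0]])
        lam, V = np.linalg.eig(A)          # V = (v1 ⋯ vn), lam = (λ1, ⋯, λn)
        D = np.diag(lam)                   # the diagonal matrix (λ1e1 ⋯ λnen)

        # A = (v1 ⋯ vn)·(λ1e1 ⋯ λnen)·(v1 ⋯ vn)⁻¹
        print(np.allclose(A, V @ D @ np.linalg.inv(V)))                 # True

        # powers become trivial : A¹⁰ = V·D¹⁰·V⁻¹, and D¹⁰ is just the entrywise power
        print(np.allclose(np.linalg.matrix_power(A, 10),
                          V @ np.diag(lam ** 10) @ np.linalg.inv(V)))   # True

        # and det(A) is just the product λ1 ⋯ λn of the eigenvalues
        print(np.isclose(np.linalg.det(A), np.prod(lam)))               # True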