The introduction and development of the notion of a matrix and the subject of linear algebra followed the development of determinants, which arose from the study of coefficients of systems of linear equations. Leibniz, one of the founders of calculus, used determinants in 1693, and Cramer presented his determinant-based formula for solving systems of linear equations (today known as Cramer's rule) in 1750.
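As a minimal illustration of Cramer's rule (with made-up coefficients), each unknown is obtained by replacing the corresponding column of the coefficient matrix with the right-hand side and taking a ratio of determinants. The Python sketch below assumes NumPy is available.

```python
# A minimal sketch of Cramer's rule for a 2x2 system.
# The matrix A and vector b are illustrative, made-up values.
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])   # coefficient matrix
b = np.array([5.0, 10.0])    # right-hand side

det_A = np.linalg.det(A)

# Replace each column of A with b in turn and take determinants.
x = np.empty(2)
for i in range(2):
    A_i = A.copy()
    A_i[:, i] = b
    x[i] = np.linalg.det(A_i) / det_A

print(x)                       # [1. 3.]
print(np.linalg.solve(A, b))   # same answer, for comparison
```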
The first implicit use of matrices occurred in Lagrange's work on bilinear forms in the late 1700s. Lagrange wanted to characterize the maxima and minima of multivariate functions. His method is now known as the method of Lagrange multipliers. To do this he first required the first-order partial derivatives to be zero and additionally required that a condition on the matrix of second-order partial derivatives hold; this condition is today called positive or negative definiteness, although Lagrange did not use matrices explicitly.
Gauss developed elimination around 1800 and used it to solve least squares problems in celestial computations and later in computations to measure the earth and its surface (the branch of applied mathematics concerned with measuring or determining the shape of the earth, or with locating points exactly on the earth's surface, is called geodesy). Even though Gauss's name is associated with this technique of eliminating variables from systems of linear equations, there was earlier work on the subject.
Chinese manuscripts from several centuries earlier have been found that explain how to solve a system of three equations in three unknowns by "Gaussian" elimination. For years Gaussian elimination was considered part of the development of geodesy, not mathematics. The first appearance of Gauss-Jordan elimination in print was in a handbook on geodesy written by Wilhelm Jordan. Many people incorrectly assume that the famous mathematician Camille Jordan is the Jordan in "Gauss-Jordan elimination".
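The elimination procedure itself is simple to state: subtract multiples of each equation from those below it to reach triangular form, then back-substitute. Here is a hedged Python sketch for a three-by-three system (illustrative values; no pivoting, so nonzero pivots are assumed).

```python
# A sketch of Gaussian elimination with back-substitution for a 3x3
# system; no pivoting, so it assumes nonzero pivots (illustrative only).
import numpy as np

def gaussian_elimination(A, b):
    A = A.astype(float)
    b = b.astype(float)
    n = len(b)
    # Forward elimination: zero out entries below each pivot.
    for k in range(n - 1):
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    # Back-substitution on the resulting upper-triangular system.
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
    return x

A = np.array([[2, 1, -1], [-3, -1, 2], [-2, 1, 2]])
b = np.array([8, -11, -3])
print(gaussian_elimination(A, b))  # [ 2.  3. -1.]
```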
For matrix algebra to develop fruitfully, one needed both proper notation and a proper definition of matrix multiplication. Both needs were met at about the same time in the same place. In 1848 in England, J. J. Sylvester first introduced the term "matrix", the Latin word for "womb", as a name for an array of numbers.
Matrix algebra was nurtured by the work of Arthur Cayley in 1855. Cayley defined matrix multiplication so that the matrix of coefficients for the composite transformation ST is the product of the matrix for S times the matrix for T. He went on to study the algebra of these compositions, including matrix inverses. The famous Cayley-Hamilton theorem, which asserts that a square matrix is a root of its characteristic polynomial, was given by Cayley in his 1858 memoir on the theory of matrices. The use of a single letter A to represent a matrix was crucial to the development of matrix algebra. Early in the development, the formula det(AB) = det(A)det(B) provided a connection between matrix algebra and determinants. Cayley wrote, "There would be many things to say about this theory of matrices which should, it seems to me, precede the theory of determinants."
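The identity det(AB) = det(A)det(B) is easy to check numerically; the following small Python sketch does so on made-up 2x2 matrices.

```python
# A small numerical check of the identity det(AB) = det(A)det(B)
# on made-up 2x2 matrices.
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[0.0, 1.0], [5.0, 6.0]])

print(np.linalg.det(A @ B))                 # 10.0 (up to round-off)
print(np.linalg.det(A) * np.linalg.det(B))  # 10.0
```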
Mathematicians also attempted to develop an algebra of vectors, but there was no natural definition of the product of two vectors that held in arbitrary dimensions. The first vector algebra that involved a non-commutative vector product (that is, v × w need not equal w × v) was proposed by Hermann Grassmann in his book Ausdehnungslehre (1844). Grassmann's text also introduced the product of a column matrix and a row matrix, which results in what is now called a simple, or rank-one, matrix. In the late 19th century the American mathematical physicist Willard Gibbs published his famous treatise on vector analysis. In that treatise Gibbs represented general matrices, which he called dyadics, as sums of simple matrices, which he called dyads. Later the physicist P. A. M. Dirac introduced the term "bracket" for what we now call the scalar product of a "bra" (row) vector times a "ket" (column) vector, and the term "ket-bra" for the product of a ket times a bra, resulting in what we now call a simple matrix, as above. Our convention of identifying column matrices with vectors was introduced by physicists in the 20th century.
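The column-times-row product Grassmann introduced is today's outer product. The sketch below (illustrative vectors, NumPy assumed) shows that it yields a rank-one matrix, while the product in the opposite order, Dirac's bra-ket, is the scalar product.

```python
# The outer product of a column vector and a row vector gives a
# rank-one ("simple") matrix, Dirac's "ket-bra". Values are made up.
import numpy as np

ket = np.array([[1.0], [2.0], [3.0]])   # 3x1 column ("ket")
bra = np.array([[4.0, 5.0, 6.0]])       # 1x3 row ("bra")

simple = ket @ bra                      # 3x3 rank-one matrix
print(simple)
print(np.linalg.matrix_rank(simple))    # 1

# The bra-ket product in the other order is the scalar product.
print(bra @ ket)                        # [[32.]] i.e. 1*4 + 2*5 + 3*6
```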
Matrices continued to be closely associated with linear transformations. By 1900 they were just a finite-dimensional special case of the emerging theory of linear transformations. The modern definition of a vector space was introduced by Peano in 1888. Abstract vector spaces whose elements were functions soon followed. There was renewed interest in matrices, particularly in the numerical analysis of matrices. John von Neumann and Herman Goldstine introduced condition numbers in analyzing round-off errors in 1947. Alan Turing and von Neumann were the 20th-century giants in the development of stored-program computers. Turing introduced the LU decomposition of a matrix in 1948: the L is a lower triangular matrix with 1's on the diagonal and the U is an echelon (upper triangular) matrix. It is common to use the LU decomposition in the solution of a sequence of systems of linear equations, each having the same coefficient matrix. The benefit of the QR decomposition was realized a decade later: the Q is a matrix whose columns are orthonormal vectors and R is a square upper triangular invertible matrix with positive entries on its diagonal.
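The shapes of the factors described above are easy to exhibit numerically. The following hedged sketch (made-up matrix, NumPy and SciPy assumed) computes both decompositions and reuses the LU factors to solve a system by two triangular solves, which is exactly why one factorization can serve a whole sequence of right-hand sides.

```python
# A sketch of the LU and QR decompositions on a made-up matrix.
import numpy as np
from scipy.linalg import lu, solve_triangular

A = np.array([[4.0, 3.0], [6.0, 3.0]])

# LU: SciPy returns P, L, U with A = P @ L @ U; L is lower triangular
# with 1's on the diagonal, U is upper triangular (echelon).
P, L, U = lu(A)
print(np.allclose(A, P @ L @ U))         # True

# QR: Q has orthonormal columns, R is upper triangular.
Q, R = np.linalg.qr(A)
print(np.allclose(A, Q @ R))             # True
print(np.allclose(Q.T @ Q, np.eye(2)))   # True: columns of Q orthonormal

# Reusing one LU factorization to solve a system with the same
# coefficient matrix: solve L(Ux) = P^T b by two triangular solves.
b = np.array([10.0, 12.0])
y = solve_triangular(L, P.T @ b, lower=True)
x = solve_triangular(U, y)
print(np.allclose(A @ x, b))             # True
```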
The QR factorization is used in computer algorithms for various computations, such as solving equations and finding eigenvalues. In 1878 Frobenius wrote an important work on matrices, On Linear Substitutions and Bilinear Forms, although he seemed unaware of Cayley's work. However, he proved important results on canonical matrices as representatives of equivalence classes of matrices. He cites Kronecker and Weierstrass as having considered special cases of his results, in 1868 and 1874 respectively.
Frobenius also proved the general result that a matrix satisfies its characteristic equation. This 1878 paper by Frobenius also contains the definition of the rank of a matrix, which he used in his work on canonical forms, and the definition of orthogonal matrices.
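The Cayley-Hamilton result that Frobenius proved in general is easy to illustrate for a 2x2 matrix, whose characteristic polynomial is λ² − tr(A)·λ + det(A): substituting the matrix itself gives the zero matrix, as the sketch below (made-up entries) confirms.

```python
# A numerical illustration of the Cayley-Hamilton theorem: a 2x2 matrix
# substituted into its own characteristic polynomial gives the zero matrix.
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])

# Characteristic polynomial of a 2x2 matrix: p(t) = t^2 - tr(A) t + det(A).
tr = np.trace(A)
det = np.linalg.det(A)

result = A @ A - tr * A + det * np.eye(2)
print(result)   # approximately the 2x2 zero matrix
```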
An axiomatic definition of a determinant was used by Weierstrass in his lectures and, after his death, was published in 1903 in the note On Determinant Theory. In the same year Kronecker's lectures on determinants were also published posthumously. With these two publications the modern theory of determinants was in place, but matrix theory took slightly longer to become a fully accepted theory. An important early text which brought matrices into their proper place within mathematics was Bôcher's Introduction to Higher Algebra in 1907. Turnbull and Aitken wrote influential texts in the 1930s, and Mirsky's An Introduction to Linear Algebra in 1955 saw matrix theory reach its present major role as one of the most important undergraduate mathematics topics.