Technology & Engineering
Basic Matrix Operations
Basic matrix operations are the fundamental algebraic operations performed on matrices: addition, subtraction, scalar multiplication, and matrix multiplication. Matrices cannot be divided in the ordinary sense; the role of division is played, where it is possible at all, by multiplication by an inverse matrix. These operations are essential in many fields, including engineering, physics, and computer science.
Written by Perlego with AI-assistance
10 Key excerpts on "Basic Matrix Operations"
- C Y Hsiung, G Y Mao(Authors)
- 1998(Publication Date)
- World Scientific(Publisher)
CHAPTER 3 MATRIX OPERATIONS. The concept of matrices was discussed in the previous chapter. It is not only an important tool for treating a system of linear equations, but also an indispensable tool for studying linear functions. In the following chapters we shall often make use of it. The advantage of matrix operations is that when a matrix is regarded as a quantity, it makes operations on arrays of ordinary numbers extremely simple. In this chapter we shall discuss matrix operations, particularly as regards the following three aspects. 1. Matrix addition, matrix subtraction, scalar multiplication, matrix multiplication, etc., as well as the basic properties of these matrix operations. 2. Some specially important matrices. 3. The necessary and sufficient condition for a matrix to be invertible and methods of finding an inverse matrix. This chapter is divided into three parts in which we in turn consider the above three aspects. 3.1. Matrix Addition and Matrix Multiplication. In Sec. 2.1 the definition of a matrix was given. The matrix A of order n is called a nonsingular matrix if its determinant |A| ≠ 0. Otherwise, i.e., if |A| = 0, A is called a singular matrix. When all elements are real numbers, A is called a real matrix. As in the case of equality, addition, and scalar multiplication of vectors, we have the following definitions.
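The nonsingular/singular distinction above (invertible exactly when the determinant is nonzero) can be illustrated with a short sketch. The following Python snippet is a minimal illustration of the idea, not the authors' method; the helper names det and is_singular are mine, and the determinant is computed by cofactor expansion purely for clarity.

    # Minimal sketch; matrices are assumed to be lists of rows of equal length.
    def det(A):
        """Determinant of a square matrix via cofactor (Laplace) expansion."""
        n = len(A)
        if n == 1:
            return A[0][0]
        total = 0
        for j in range(n):
            # Minor: delete row 0 and column j, then expand recursively.
            minor = [row[:j] + row[j + 1:] for row in A[1:]]
            total += (-1) ** j * A[0][j] * det(minor)
        return total

    def is_singular(A):
        return det(A) == 0

    A = [[1, 2], [2, 4]]   # rows are proportional, so |A| = 0
    B = [[1, 2], [3, 4]]   # |B| = -2, nonzero
    print(is_singular(A), is_singular(B))  # True False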
- Stefan Waner, Steven Costenoble(Authors)
- 2017(Publication Date)
- Cengage Learning EMEA(Publisher)
Chapter 4 Matrix Algebra and Applications. Introduction. We used matrices in Chapter 3 simply to organize our work. It is time we examined them as interesting objects in their own right. There is much that we can do with matrices besides row operations: We can add, subtract, multiply, and even, in a sense, "divide" matrices. We use these operations to study game theory and input-output models in this chapter, and Markov chains in a later chapter. Many calculators, spreadsheets, and other computer programs can do these matrix operations, which is a big help in doing calculations. However, we need to know how these operations are defined to see why they are useful and to understand which to use in any particular application.

4.1 Matrix Addition and Scalar Multiplication. Matrices. Let's start by formally defining what a matrix is and introducing some basic terms. Matrix, Dimension, and Entries. An m × n matrix A is a rectangular array of real numbers with m rows and n columns. We refer to m and n as the dimensions of the matrix. The numbers that appear in the matrix are called its entries. We customarily use capital letters A, B, C, ... for the names of matrices.

Quick Examples: 1. A = [2 0 1; 33 −22 0] is a 2 × 3 matrix because it has two rows and three columns. 2. B = [2 3; 10 44; −1 3; 8 3] is a 4 × 2 matrix because it has four rows and two columns. The entries of A are 2, 0, 1, 33, −22, and 0. The entries of B are the numbers 2, 3, 10, 44, −1, 3, 8, and 3. Remember that the number of rows is given first and the number of columns second. An easy way to remember this is to think of the acronym "RC" for "Row then Column." Referring to the Entries of a Matrix. There is a systematic way of referring to particular entries in a matrix. If i and j are numbers, then the entry in the ith row and jth column of the matrix A is called the ijth entry of A.
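To make the dimension and entry conventions concrete ("RC": rows first, then columns), here is a small Python sketch built around the excerpt's 2 × 3 example matrix A; the helper names dimensions and entry are my own, not the book's.

    # The 2 x 3 example matrix A from the excerpt, stored as a list of rows.
    A = [[2, 0, 1],
         [33, -22, 0]]

    def dimensions(M):
        """Return (rows, columns): row count first, column count second ("RC")."""
        return len(M), len(M[0])

    def entry(M, i, j):
        """The ij-th entry: row i, column j (1-based, as in the text)."""
        return M[i - 1][j - 1]

    print(dimensions(A))   # (2, 3)
    print(entry(A, 2, 2))  # -22, the entry in row 2, column 2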
- Al Cuoco, Kevin Waterman, Bowen Kerins, Elena Kaczorowski(Authors)
- 2019(Publication Date)
- American Mathematical Society(Publisher)
CHAPTER 4 Matrix Algebra. In the last chapter, you used matrices as a bookkeeping organizer: the matrix kept track of the coefficients in a linear system. By operating on the matrix in certain ways (using the elementary row operations), you transformed the matrix without changing the solution set of the underlying linear system. But matrices can be objects in their own right. They have their own algebra, their own basic rules, and their own operations beyond the elementary row operations. If you think of a vector as a matrix, you can start extending vector operations, such as addition and scalar multiplication, to matrices of any dimension. This chapter defines three operations (addition, scaling, and multiplication) and develops an algebra of matrices that allows you to perform complicated calculations and solve seemingly difficult problems very efficiently. So, think of this chapter as an expansion of your algebra toolbox. By the end of this chapter, you will be able to answer questions like these: 1. When can you multiply two matrices? 2. How can you tell if a matrix equation has a unique solution? 3. Let A = [1 4 3; −1 1 2; 5 4 2]. What is A⁻¹? You will build good habits and skills for ways to look for similarity in structure, reason about calculations, create a process, seek general results, and look for connections.

Vocabulary and Notation: A_ij, A_*j, A_i*; diagonal matrix; entry; equal matrices; identity matrix; inverse; invertible matrix, nonsingular matrix; kernel; lower triangular matrix; m × n matrix; matrix multiplication, matrix product; multiplication by a scalar; scalar matrix; singular matrix; skew-symmetric matrix; square matrix; sum of matrices; symmetric matrix; transpose; upper triangular matrix. 4.1 Getting Started. A matrix is a rectangular array of numbers; you've seen matrices before.
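Question 3 above can be checked numerically. The sketch below uses NumPy, which is my choice rather than anything the excerpt prescribes, to confirm that the given matrix is invertible and to compute its inverse.

    import numpy as np

    # The matrix A from question 3 of the excerpt.
    A = np.array([[1, 4, 3],
                  [-1, 1, 2],
                  [5, 4, 2]], dtype=float)

    # A is invertible exactly when its determinant is nonzero.
    print(np.linalg.det(A))  # 15.0 (up to floating-point rounding)

    A_inv = np.linalg.inv(A)
    # Multiplying A by its inverse should recover the identity matrix.
    print(np.allclose(A @ A_inv, np.eye(3)))  # True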
- Richard A. Brualdi, Dragos Cvetkovic(Authors)
- 2008(Publication Date)
- Chapman and Hall/CRC(Publisher)
Chapter 2 Basic Matrix Operations. In this chapter we introduce matrices as arrays of numbers and define their basic algebraic operations: sum, product, and transposition. Next, we associate to a matrix a digraph called the König digraph and establish connections of matrix operations with certain operations on graphs. These graph-theoretic operations illuminate the matrix operations and aid in understanding their properties. In particular, we use the König digraph to explain how matrices can be partitioned into blocks in order to facilitate matrix operations. 2.1 Basic Concepts. Let m and n be positive integers. (There will be occasions later when we will want to allow m and n to be 0, resulting in empty matrices in the definition.) A matrix is an m by n rectangular array of numbers (these may be real numbers, complex numbers, or numbers from some other arithmetic system, such as the integers modulo n) of the form A = [a_11 a_12 ... a_1n; a_21 a_22 ... a_2n; ...; a_m1 a_m2 ... a_mn]. (2.1) The matrix A has size m by n and we often say that its type is m × n. The mn numbers a_ij are called the entries (or elements) of the matrix A. If m = n, then A is a square matrix, and instead of saying A has size n by n we usually say that A is a square matrix of order n. The matrix A in (2.1) has m rows of the form α_i = [a_i1 a_i2 ... a_in] (i = 1, 2, ..., m) and n columns β_j, with entries a_1j, a_2j, ..., a_mj (j = 1, 2, ..., n). The entry a_ij contained in both α_i and β_j, that is, the entry at the intersection of row i and column j, is the (i, j)-entry of A. The rows α_i are 1 by n matrices, or row vectors; the columns β_j are m by 1 matrices, or column vectors. For brevity we denote the m by n matrix A by A = [a_ij]_{m,n}, and usually more simply as [a_ij] if the size is understood.
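To make the row-vector and column-vector notation concrete, here is a brief Python sketch; the 2 by 3 matrix and the helper names row and column are my own illustration, not taken from the book.

    # A 2 by 3 matrix stored as a list of rows (entries chosen for illustration).
    A = [[1, 2, 3],
         [4, 5, 6]]

    def row(M, i):
        """Row alpha_i of M as a row vector (1-based index, as in the excerpt)."""
        return M[i - 1]

    def column(M, j):
        """Column beta_j of M, listing its entries a_1j, ..., a_mj."""
        return [r[j - 1] for r in M]

    print(row(A, 2))     # [4, 5, 6]
    print(column(A, 3))  # [3, 6]
    # The (i, j)-entry sits at the intersection of row i and column j:
    print(row(A, 2)[2])  # 6, the (2, 3)-entry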
- Sudipto Banerjee, Anindya Roy(Authors)
- 2014(Publication Date)
- Chapman and Hall/CRC(Publisher)
CHAPTER 1 Matrices, Vectors and Their Operations. Linear algebra usually starts with the analysis and solutions for systems of linear equations such as a_11 x_1 + a_12 x_2 + ... + a_1n x_n = b_1, a_21 x_1 + a_22 x_2 + ... + a_2n x_n = b_2, ..., a_m1 x_1 + a_m2 x_2 + ... + a_mn x_n = b_m. Such systems are of fundamental importance because they arise in diverse mathematical and scientific disciplines. The a_ij's and b_i's are usually known from the manner in which these equations arise. The x_i's are unknowns that satisfy the above set of equations and need to be found. The solution to such a system depends on the a_ij's and b_i's. They contain all the information we need about the system. It is, therefore, natural to store these numbers in an array and develop mathematical operations for these arrays that will lead us to the x_i's. Example 1.1 Consider the following system of three equations in four unknowns: 4x_1 + 7x_2 + 2x_3 = 2, −6x_1 − 10x_2 + x_4 = 1, 4x_1 + 6x_2 + 4x_3 + 5x_4 = 0. (1.1) All the information contained in the above system can be stored in a rectangular array with three rows and four columns containing the coefficients of the unknowns and another single column comprising the entries in the right-hand side of the equation. Thus, [4 7 2 0; −6 −10 0 1; 4 6 4 5] and [2; 1; 0] are the two arrays that represent the linear system. We use two different arrays to distinguish between the coefficients on the left-hand side and the right-hand side. Alternatively, one could create one augmented array [4 7 2 0 | 2; −6 −10 0 1 | 1; 4 6 4 5 | 0] with a "|" to distinguish the right-hand side of the linear system. We will return to solving linear equations using matrices in Chapter 2. More generally, rectangular arrays are often used as data structures to store information in computers.
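The bookkeeping described in Example 1.1 (a coefficient array, a right-hand-side column, and the augmented array formed from them) is easy to sketch in Python; the function name augment below is mine.

    # Coefficient array and right-hand side of system (1.1).
    coeffs = [[4, 7, 2, 0],
              [-6, -10, 0, 1],
              [4, 6, 4, 5]]
    rhs = [2, 1, 0]

    def augment(A, b):
        """Append b_i to the end of row i, forming the augmented array [A | b]."""
        return [row + [bi] for row, bi in zip(A, b)]

    for row in augment(coeffs, rhs):
        print(row)
    # [4, 7, 2, 0, 2]
    # [-6, -10, 0, 1, 1]
    # [4, 6, 4, 5, 0]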
- Ron Larson(Author)
- 2017(Publication Date)
- Cengage Learning EMEA(Publisher)
Matrices. 2.1 Operations with Matrices. 2.2 Properties of Matrix Operations. 2.3 The Inverse of a Matrix. 2.4 Elementary Matrices. 2.5 Markov Chains. 2.6 More Applications of Matrix Operations. Applications: Information Retrieval, Flight Crew Scheduling, Beam Deflection, Computational Fluid Dynamics, Data Encryption.

2.1 Operations with Matrices. Determine whether two matrices are equal. Add and subtract matrices and multiply a matrix by a scalar. Multiply two matrices. Use matrices to solve a system of linear equations. Partition a matrix and write a linear combination of column vectors. EQUALITY OF MATRICES. In Section 1.2, you used matrices to solve systems of linear equations. This chapter introduces some fundamentals of matrix theory and further applications of matrices. It is standard mathematical convention to represent matrices in any one of the three ways listed below. 1. An uppercase letter such as A, B, or C. 2. A representative element enclosed in brackets, such as [a_ij], [b_ij], or [c_ij]. 3. A rectangular array of numbers [a_11 a_12 ... a_1n; a_21 a_22 ... a_2n; ...; a_m1 a_m2 ... a_mn]. As mentioned in Chapter 1, the matrices in this text are primarily real matrices. That is, their entries are real numbers. Two matrices are equal when their corresponding entries are equal. Definition of Equality of Matrices: Two matrices A = [a_ij] and B = [b_ij] are equal when they have the same size (m × n) and a_ij = b_ij for 1 ≤ i ≤ m and 1 ≤ j ≤ n.
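The definition of matrix equality above (same size and agreement of all corresponding entries) translates directly into a short check; this Python sketch is illustrative only and not from the text.

    def matrices_equal(A, B):
        """True when A and B have the same size and all corresponding entries agree."""
        if len(A) != len(B) or any(len(ra) != len(rb) for ra, rb in zip(A, B)):
            return False  # matrices of different sizes are never equal
        return all(a == b for ra, rb in zip(A, B) for a, b in zip(ra, rb))

    print(matrices_equal([[1, 2], [3, 4]], [[1, 2], [3, 4]]))        # True
    print(matrices_equal([[1, 2], [3, 4]], [[1, 2, 0], [3, 4, 0]]))  # False: sizes differ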
- Paul E. Green(Author)
- 2014(Publication Date)
- Academic Press(Publisher)
2. VECTOR AND MATRIX OPERATIONS FOR MULTIVARIATE ANALYSIS. 2.4 MATRIX REPRESENTATION. As in our introduction to vector arithmetic, our purpose here is to describe various operations involving matrices as they relate to subsequent discussion of multivariate procedures. Again, we attempt no definitive treatment of the topic but, rather, select those aspects of particular relevance to subsequent chapters. We first present a discussion of elementary relations and operations on matrices and then turn to a description of special types of matrices. More advanced topics in matrix algebra are relegated to subsequent chapters and the appendixes. 2.5 BASIC DEFINITIONS AND OPERATIONS ON MATRICES. A matrix A of order m by n, and with general entry a_ij, consists of a rectangular array of real numbers (scalars) arranged in m rows and n columns, written A = (a_ij)_{m×n}. For example, a 4 × 5 matrix would be explicitly written, in brackets, as [a_11 a_12 a_13 a_14 a_15; a_21 a_22 a_23 a_24 a_25; a_31 a_32 a_33 a_34 a_35; a_41 a_42 a_43 a_44 a_45], where i = 1, 2, 3, 4 and j = 1, 2, 3, 4, 5. As is the case for vectors, matrices will appear in boldfaced type, such as A, B, C, etc. A matrix can exhibit any relation between m, the number of rows, and n, the number of columns. For example, if either m > n or n > m, we have a rectangular matrix. (The former is often called a vertical matrix, while the latter is often called a horizontal matrix.) If m = n, the matrix is called square. To illustrate, B_{3×3} = [b_11 b_12 b_13; b_21 b_22 b_23; b_31 b_32 b_33]. The set of elements on the diagonal, from upper left to lower right, is called the main or principal diagonal of the square matrix B. Square matrices occur quite frequently as derived matrices in multivariate analysis. For example, a correlation matrix, to be described later in the chapter, is a square matrix.
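Since the square/rectangular distinction and the main diagonal recur throughout the passage, here is a small Python sketch of both ideas; the helper names are my own.

    def is_square(M):
        """A matrix is square when its row count equals its column count."""
        return len(M) == len(M[0])

    def main_diagonal(M):
        """Entries from upper left to lower right of a square matrix."""
        return [M[k][k] for k in range(len(M))]

    B = [[1, 4, 7],
         [2, 5, 8],
         [3, 6, 9]]
    print(is_square(B))      # True
    print(main_diagonal(B))  # [1, 5, 9]
    print(is_square([[1, 2, 3], [4, 5, 6]]))  # False: 2 x 3 is rectangular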
Mathematical Methods for Physicists
A Concise Introduction
- Tai L. Chow(Author)
- 2000(Publication Date)
- Cambridge University Press(Publisher)
3 Matrix algebra. As vector methods have become standard tools for physicists, so too matrix methods are becoming very useful tools in sciences and engineering. Matrices occur in physics in at least two ways: in handling the eigenvalue problems in classical and quantum mechanics, and in the solutions of systems of linear equations. In this chapter, we introduce matrices and related concepts, and define some basic matrix algebra. In Chapter 5 we will discuss various operations with matrices in dealing with transformations of vectors in vector spaces and the operation of linear operators on vector spaces. Definition of a matrix. A matrix consists of a rectangular block or ordered array of numbers that obeys prescribed rules of addition and multiplication. The numbers may be real or complex. The array is usually enclosed within curved brackets. Thus [1 2 4; 2 1 7] is a matrix consisting of 2 rows and 3 columns, and it is called a 2 × 3 (2 by 3) matrix. An m × n matrix consists of m rows and n columns, which is usually expressed in a double-suffix notation: Ã = [a_11 a_12 a_13 ... a_1n; a_21 a_22 a_23 ... a_2n; ...; a_m1 a_m2 a_m3 ... a_mn]. (3.1) Each number a_ij is called an element of the matrix, where the first subscript i denotes the row, while the second subscript j indicates the column. Thus, a_23 refers to the element in the second row and third column. The element a_ij should be distinguished from the element a_ji. It should be pointed out that a matrix has no single numerical value; therefore it must be carefully distinguished from a determinant. We will denote a matrix by a letter with a tilde over it, such as Ã in (3.1).
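The double-suffix convention (first subscript selects the row, second selects the column) and the warning that a_ij differs from a_ji can be illustrated in a few lines of Python; the 3 × 3 matrix below is my own example, not the excerpt's 2 × 3 one, so that both a_23 and a_32 exist.

    # A 3 x 3 example (entries chosen for illustration, not from the excerpt).
    A = [[1, 2, 4],
         [2, 1, 7],
         [0, 3, 5]]

    def elem(M, i, j):
        """a_ij: the element in row i and column j (1-based indices)."""
        return M[i - 1][j - 1]

    # The first subscript picks the row, the second the column:
    print(elem(A, 2, 3))  # 7 (second row, third column)
    print(elem(A, 3, 2))  # 3 -- a_32 is in general a different element from a_23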
- Peter Dale(Author)
- 2014(Publication Date)
- CRC Press(Publisher)
7 Matrices and Determinants. 7.1 Basic Matrix Operations. Matrices are a mathematical form of shorthand. At its simplest level, a matrix is a set of numbers aligned in rows and columns in the form of a rectangle and then enclosed in brackets. Each number within the matrix is an element and the whole set of numbers may be referred to as an array. Rather than talk about each of the elements within the array, we can simply refer to the matrix as a whole and call it M. For example, M = [1 2 3; 4 5 6] is a matrix with the first six integers arranged in two rows and three columns and is called a 2 × 3 matrix. Think of M as six boxes or cells, each containing a number, rather like a spreadsheet. Likewise, we may have a single column of cells with boxes stacked vertically, as in [7; 8; 9]. This is a column matrix with three rows and one column, or 3 × 1. If the number of rows equals the number of columns then the matrix is said to be square. Thus, [−1 0 0; 0 −2 0; 0 0 3] is a 3 × 3 square matrix. It has nine cells and since in this example the only numbers other than zero lie along the diagonal, it is called a diagonal matrix. The diagonal from top left to bottom right is called the leading diagonal. If all the numbers in the leading diagonal are 1 and all the other elements are zero, then the matrix is called an identity matrix. Thus, [1 0; 0 1] and [1 0 0; 0 1 0; 0 0 1] and [1 0 0 0; 0 1 0 0; 0 0 1 0; 0 0 0 1] are all identity matrices and are often written as I. The way that matrices are manipulated follows certain rules that are ideally suited to handling in a computer as the operations are in general repetitive. Two matrices can be added or subtracted if they are the same size. This is done by adding or subtracting the corresponding elements (Box 7.1).
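The size rule just stated for addition and subtraction (same size, combine corresponding elements) can be sketched as follows; this Python function is my own illustration of the rule summarized in Box 7.1, not code from the book.

    def add(A, B):
        """Elementwise sum; only defined when A and B have the same size."""
        if len(A) != len(B) or len(A[0]) != len(B[0]):
            raise ValueError("matrices must be the same size to add or subtract")
        return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

    M = [[1, 2, 3],
         [4, 5, 6]]
    N = [[10, 20, 30],
         [40, 50, 60]]
    print(add(M, N))  # [[11, 22, 33], [44, 55, 66]]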
Exploring Linear Algebra
Labs and Projects with Mathematica ®
- Crista Arangala(Author)
- 2014(Publication Date)
- Chapman and Hall/CRC(Publisher)
Matrix Operations. Operations on Matrices.

Adding Two Matrices. To add two matrices together, type: The Name of the Matrix1 + The Name of the Matrix2. Exercises: a. Find the sum A + B. You should get an error; explain why you think an error occurred. b. Define matrix M = [4 5 1; −1 3 2]. Find A + M and M + A. Is addition of matrices commutative? c. Explain the process of matrix addition. What are the dimensions of the sum matrix? How would you take the difference of two matrices?

Scalar Multiplication. To multiply a matrix by a constant c, type: c The Name of the Matrix. Exercise: Multiply matrix A by the scalar 4. Is multiplication by a scalar from the left the same as multiplication by a scalar from the right? (i.e., does 4A = A4?)

Multiplying Two Matrices. To multiply two matrices together, type: The Name of the Matrix1 . The Name of the Matrix2. Be very careful here: A*B does not produce the correct matrix; you must use . to symbolize multiplication. Exercises: a. Multiply matrix A on the right by matrix B. b. Go to http://demonstrations.wolfram.com/MatrixMultiplication/ and try some examples of matrix multiplication. Then describe the multiplication process. c. Multiply matrix A on the left by matrix B. Was your description of the multiplication process correct? What are the dimensions of this matrix? d. Multiply matrix A on the right by matrix M. You should get an error; explain why an error occurred. e. Is matrix multiplication commutative? What has to be true about the dimensions of two matrices in order to multiply them together?

The Transpose and Trace of a Matrix. The transpose of a matrix A is denoted A^T. To take the transpose of a matrix, type: Transpose[The Name of the Matrix]. Exercises: a. Take the transpose of matrix A and describe the transpose operation. b. What are the dimensions of the matrix A^T? c. What is (A^T)^T? d. Calculate (A + M)^T. Does this equal A^T + M^T?
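The dimension rule probed in these exercises (the column count of the left factor must match the row count of the right factor, and the product is generally not commutative) can be illustrated outside Mathematica as well; the following is a rough Python analogue written for this index page, not part of the lab.

    def matmul(A, B):
        """Product of A (m x n) and B (n x p); defined only when the inner dimensions match."""
        if len(A[0]) != len(B):
            raise ValueError("columns of the left matrix must equal rows of the right matrix")
        return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
                 for j in range(len(B[0]))]
                for i in range(len(A))]

    A = [[1, 2], [3, 4]]
    B = [[0, 1], [1, 0]]
    print(matmul(A, B))  # [[2, 1], [4, 3]]
    print(matmul(B, A))  # [[3, 4], [1, 2]] -- different result, so the product is not commutative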
Index pages curate the most relevant extracts from our library of academic textbooks. They’ve been created using an in-house natural language model (NLM), each adding context and meaning to key research topics.









