Linear Algebra for Engineers

The following presentation builds on previous posts on linear algebra to establish its use in engineering. It includes a general summary of applications along with direct computation for solving engineering problems. The computational aspect covers general constructions and the specific maths used in simulation. The presentation also covers the meaning of linear algebra vocabulary in an engineering context.

Contributor: Joseph Brewster

Matrices in Calculus

Examples are a very important tool in understanding mathematics, including linear maps and matrices. Common examples of linear maps include differentiation and integration (taking the constant of integration c = 0), viewed as maps between vector spaces of polynomials. Through experience with these operations, it is clear that each satisfies the properties of additivity and homogeneity. Letting D represent differentiation and I represent integration, these properties mean that D(a+b) = D(a) + D(b) and cD(a) = D(ca) for a constant c, and likewise for I.

For a monomial \(x^n\) of degree n, the derivative maps \(x^n\) to \(nx^{n-1}\) and the integral maps \(x^n\) to \(\frac{x^{n+1}}{n+1}\). This way of expressing the operations draws the connection to the basis elements of the space of polynomials. Since we have already defined the maps on the basis elements, a matrix can be constructed for both operations. The general construction places the coefficients (1, 2, 3, … for differentiation and 1, ½, ⅓, … for integration) along a diagonal shifted off the main diagonal, the shift representing the change in power.
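As a concrete illustration (assuming the finite truncation where differentiation \(D\) maps polynomials of degree at most four to degree at most three, and integration \(I\) maps degree at most three to degree at most four):

\[ D = \begin{pmatrix} 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 2 & 0 & 0 \\ 0 & 0 & 0 & 3 & 0 \\ 0 & 0 & 0 & 0 & 4 \end{pmatrix}, \qquad I = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & \frac{1}{2} & 0 & 0 \\ 0 & 0 & \frac{1}{3} & 0 \\ 0 & 0 & 0 & \frac{1}{4} \end{pmatrix} \]

Here the \(k\)-th column of each matrix lists the coefficients of the image of the basis element \(x^k\).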
These matrices, or at least their infinitely large versions, can be applied to any polynomial to yield the derivative or integral. With matrix operations, this is done by multiplying the matrix for the operation by the n-by-1 column vector of the input polynomial's coefficients.
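A minimal sketch of this matrix-vector view (assuming coefficients are stored lowest degree first; the helper names are illustrative, not a standard API):

```python
import numpy as np

def diff_matrix(n):
    """Differentiation as an n-by-(n+1) matrix: coefficients of a
    degree-n polynomial map to those of its derivative."""
    D = np.zeros((n, n + 1))
    for k in range(1, n + 1):
        D[k - 1, k] = k                # x^k -> k * x^(k-1)
    return D

def int_matrix(n):
    """Integration (constant c = 0) as an (n+2)-by-(n+1) matrix:
    a degree-n polynomial maps to its degree-(n+1) antiderivative."""
    I = np.zeros((n + 2, n + 1))
    for k in range(n + 1):
        I[k + 1, k] = 1.0 / (k + 1)    # x^k -> x^(k+1) / (k+1)
    return I

# p(x) = 1 + 2x + 3x^2, stored lowest degree first
p = np.array([1.0, 2.0, 3.0])
print(diff_matrix(2) @ p)   # [2. 6.]        -> 2 + 6x
print(int_matrix(2) @ p)    # [0. 1. 1. 1.]  -> x + x^2 + x^3
```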

Since integration (with c = 0) followed by differentiation yields the original function, the product of the differentiation matrix with the integration matrix is the identity matrix, as seen below. Note that the reverse composition is not the identity: differentiating first discards the constant term, which integration cannot recover.
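Continuing the sketch above, a quick numerical check of both compositions:

```python
D = diff_matrix(4)                     # degree 4 -> degree 3
I = int_matrix(3)                      # degree 3 -> degree 4
print(np.allclose(D @ I, np.eye(4)))   # True: differentiating an
                                       # antiderivative recovers p
print(np.allclose(I @ D, np.eye(5)))   # False: the constant term is lost
```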

This example is a good way to build intuition for how linear maps are represented as matrices, because derivatives and integrals are familiar operations that already satisfy the defining properties of linear maps.

Another matrix in calculus is the total derivative of a function, which is a linear map from \(R^n\) to \(R^m\) determined by a function between the two spaces. Concretely, the total derivative records, for each basis element of the domain, the rate of change of each component of the function in that direction.

Under the condition that all the partial derivatives exist, the total derivative corresponds to the Jacobian matrix, which is used to transform spaces in vector calculus.
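Concretely, for a function \(f: R^n \to R^m\) with component functions \(f_1, \dots, f_m\) and coordinates \(x_1, \dots, x_n\), the Jacobian collects all the partial derivatives, with entry \(J_{ij} = \partial f_i / \partial x_j\):

\[ J = \begin{pmatrix} \frac{\partial f_1}{\partial x_1} & \cdots & \frac{\partial f_1}{\partial x_n} \\ \vdots & \ddots & \vdots \\ \frac{\partial f_m}{\partial x_1} & \cdots & \frac{\partial f_m}{\partial x_n} \end{pmatrix} \]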

The exact transformation created by the Jacobian matrix is left as an important exercise to those studying vector calculus.

Contributor: Joseph Brewster

Applications of Linear Algebra

Matrices are everywhere: computer science, engineering, movies, and, of course, linear algebra.

Before we can understand why matrices are such a useful tool in many fields, it is important to understand what exactly a matrix can represent.

At the core level, a matrix is an array of elements with rows and columns. This idea of a matrix is a type of data structure used in computer science to store lists that have some form of overlap. For example, a matrix could represent a collection of students and their grades in a variety of classes. The rows could be classes, the columns students, and the intersection is the specific student’s grade in the specific class.

This introductory example leads nicely to the next idea of a matrix being a list of lists. Each student has a list of grades in all the classes, and our matrix becomes a list of each student’s grade list. This transitions to the use of matrices in linear algebra, the representation of linear maps.
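A minimal sketch of this list-of-lists view (the names and grades here are made up):

```python
# Rows are classes, columns are students.
students = ["Ana", "Ben", "Cal"]
classes = ["Algebra", "Physics"]
grades = [[93, 87, 78],   # Algebra: one grade per student
          [85, 91, 80]]   # Physics
# The intersection of a row and a column is one student's
# grade in one class:
print(grades[1][0])       # Ana's Physics grade -> 85
```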

For the purposes of this article, it is important to know that linear maps are functions between vector spaces. A matrix can be formed for a linear map using the basis elements of both the domain and the codomain, with each column representing the coefficients of the linear combination of codomain basis elements that a domain basis element maps to. For future reference, in an m-by-n matrix, meaning m rows and n columns, the number of columns n is the dimension of the domain and the number of rows m is the dimension of the codomain. For large vector spaces, these matrices make computations easier to perform via matrix operations.
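As a small worked example, consider the linear map \(T: R^2 \to R^3\) sending \((x, y)\) to \((x, y, x+y)\). The images of the basis vectors are \(T(e_1) = (1, 0, 1)\) and \(T(e_2) = (0, 1, 1)\), and these become the columns of a 3-by-2 matrix (three rows for the codomain, two columns for the domain):

\[ T = \begin{pmatrix} 1 & 0 \\ 0 & 1 \\ 1 & 1 \end{pmatrix} \]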

One of my personal favorite examples of the use of matrices is in engineering, as part of a process called Finite Element Analysis, or FEA. For a bit of context, FEA is used to analyze designs in engineering by breaking objects into discrete vertices, lines, faces, etc. FEA can be used for physics simulations such as structural or thermal analysis. The true depth of FEA is beyond the scope of this article, but its relation to linear algebra illustrates the applications of these concepts. The simplest matrix used in FEA looks as follows.
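For a two-node bar that can only stretch along its axis, the standard stiffness matrix, in terms of cross-sectional area \(A\), Young's modulus \(E\), and bar length \(L\), is

\[ K = \frac{EA}{L} \begin{pmatrix} 1 & -1 \\ -1 & 1 \end{pmatrix} \]

so that the axial forces at the two nodes follow from the nodal displacements via \(f = Ku\).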

This example is for a bar defined by two points that can only stretch along one axis, as more degrees of freedom or more points greatly complicate the matrix. The stiffness matrix is the matrix of a linear map whose domain is displacement vectors and whose codomain is force vectors. The columns of the stiffness matrix represent the components of the force vector mapped to by a unit displacement along each axis. For structural analysis, this matrix will always be square because the dimensions of the force and displacement spaces must be the same. All the maps between displacement and force vectors together create a system of linear equations, which computers are rather proficient at solving.

Since this method defines displacements on points but uses the bars between the points for the equations, adding more points creates a larger matrix whose diagonal is populated by overlapping 2x2 blocks such as the one shown above. This is because some points are attached to bars on each side and thus appear in the equations for both.
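A minimal sketch of this assembly-and-solve step, assuming two identical bars joined end to end with the left node fixed (the material numbers are illustrative):

```python
import numpy as np

EA, L = 1.0e7, 2.0           # assumed axial stiffness EA and bar length
k = EA / L

# Three nodes, two bars; each bar overlaps a 2x2 block onto the diagonal.
K = np.zeros((3, 3))
for i in range(2):
    K[i:i+2, i:i+2] += k * np.array([[ 1.0, -1.0],
                                     [-1.0,  1.0]])

# Fix node 0 by dropping its row and column, then solve K u = f
# for a 100 N pull on the free end (node 2).
f = np.array([0.0, 100.0])
u = np.linalg.solve(K[1:, 1:], f)
print(u)                     # displacements of nodes 1 and 2
```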

The finite element analysis of the simple example could take a few minutes to calculate by hand, depending on exactly what is being asked for, but computers can handle thousands of elements in seconds for more complex structures. The example, which at its core is just points and lines as discussed, hints at the computational scale of a process that seems conceptually approachable at the level presented.

Much of the engineering analysis that keeps bridges and buildings standing is built on the foundations of linear algebra and matrices brought to an unfathomable scale. Matrices are something that may be familiar to many readers, but the value of the matrices comes from the underlying linear algebra.

Contributors: Joseph Brewster

The Connexion Between Rings, Boolean Algebra, & Logic

Classical logic is used (by some mathematicians) to establish the basis of mathematics — but sets of statements in classical logic are also special cases of a variety of mathematical objects. One such object we have studied recently is the notion of rings.

A ring is a set equipped with (and closed under) two binary operations. The first (often termed addition) is associative and commutative, has an identity, and has an inverse for each element; the second (often termed multiplication) is associative and has an identity. (There is admittedly some disagreement over whether or not a multiplicative identity is required — rings without it are sometimes termed rngs or pseudo-rings, while rings with it are sometimes termed unit rings.) These operations must furthermore satisfy the distributivity requirement, that for any elements \(a\), \(b\), \(c\) of the ring: \(a\times(b+c)=(a\times b)+(a\times c)\) and \((a+b)\times c=(a\times c)+(b\times c)\).

The real numbers with addition and multiplication are one of the simplest and most intuitive examples of a ring; the three-dimensional Euclidean vector space is a non-example, due to the non-associativity of the cross product. Beyond such concrete examples, however, rings also appear perhaps more unexpectedly in Boolean algebra and logic.

A Boolean ring is one in which \(a\times a=a\) for every element \(a\) of the ring, while a Boolean algebra is a complemented distributive lattice — but what exactly is that? Firstly, in order to understand the notion of a lattice, it is necessary to define a partial order relation: a binary relation \(\leq\) such that \(a\leq a\), \(a\leq b \wedge b\leq a \implies a=b\), and \(a\leq b \wedge b\leq c \implies a\leq c\). A lattice is a set with a partial order relation in which every pair of elements has a least upper bound and a greatest lower bound; a Boolean algebra adds the requirements that distributivity of intersection and union across each other hold, and that every element \(a\) have a complement \(a'\), with distinguished elements \(0\) and \(1\), respectively known as zero and unit, such that the greatest lower bound of \(a\) and \(a'\) is \(0\) and their least upper bound is \(1\).

It is most common to interpret the partial order relation of a Boolean algebra as inclusion between sets. Similarly viewing multiplication as intersection and addition as symmetric difference of sets, one can derive from any Boolean ring a Boolean algebra whose operations are union and intersection, with complementation defined by \(a'=a+1\). One can likewise derive a Boolean ring from any Boolean algebra.
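A minimal sketch of this dictionary between ring and algebra, on the two-element Boolean ring \(\{0, 1\}\) where multiplication is AND and addition is XOR:

```python
from itertools import product

mul = lambda a, b: a & b     # ring multiplication = AND
add = lambda a, b: a ^ b     # ring addition = XOR (symmetric difference)

meet = mul                                      # intersection
join = lambda a, b: add(add(a, b), mul(a, b))   # union: a + b + a*b
comp = lambda a: add(a, 1)                      # complement: a' = a + 1

for a, b in product((0, 1), repeat=2):
    assert mul(a, a) == a                # Boolean ring: a * a = a
    assert meet(a, comp(a)) == 0         # a AND a' = 0 (zero)
    assert join(a, comp(a)) == 1         # a OR  a' = 1 (unit)
print("all identities hold on {0, 1}")
```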

This relationship allows us to identify a perhaps less intuitive example of a ring. Any Boolean algebra, with zero and unit corresponding to false and true, is a representation of sentential logic, if one interprets union as or, intersection as and, and complementation as negation; so any set of sentences in sentential logic can be expressed as a ring!

Contributor: Anh-Minh Tran

(Mathematically) Making a 2×2 Rubik’s Cube

Almost everyone has, at some point, played with the deceptively simple puzzle that is a Rubik’s Cube. Even before studying pure maths, we had heard of group theory through an interest in Rubik’s Cubes. Rubik’s Cubes are a common example in group theory because they physically illustrate the mathematical idea of a group. A group is defined as a set equipped with an associative operation that composes elements into other elements, together with an identity element and an inverse for each element. The elements of the group that defines a Rubik’s Cube are all the possible scrambles. The operations are generated by rotating faces of the cube, meaning algorithms that manipulate the cube in more specific ways can always be broken down into a sequence of face turns. The identity of the Rubik’s Cube group is not moving the cube, which maintains the current scramble. The inverses are rotating faces in opposite directions, or composing those inverses so as to undo a collection of moves; this can be thought of as undoing the moves of a scramble to solve the cube again. Finally, associativity can be thought of as the fact that any string of moves can be grouped into collections of easy-to-do moves without affecting the algorithm, which is rather important for solving cubes quickly.
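As a minimal sketch of these axioms in code, using a toy four-element permutation in place of the full cube (a real face turn is just a larger permutation of sticker positions):

```python
def compose(p, q):
    """Apply permutation q first, then p."""
    return tuple(p[i] for i in q)

def inverse(p):
    inv = [0] * len(p)
    for i, j in enumerate(p):
        inv[j] = i
    return tuple(inv)

identity = (0, 1, 2, 3)
r = (1, 2, 3, 0)            # a "quarter turn" cycling four positions

assert compose(r, inverse(r)) == identity   # inverses undo moves
assert compose(r, identity) == r            # the identity changes nothing
assert compose(compose(r, r), r) == compose(r, compose(r, r))  # associativity
```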

The reader may be wondering about the exact mathematical construction of a Rubik’s Cube group. For comparison, a clock represents a group called the cyclic group, because the hand repeatedly cycles through the same numbers. But just as the movement of cube pieces is a big jump in complexity from the movement of a clock hand, the group representing a Rubik’s Cube is too complex to write out for the purposes of this article. For solving a Rubik’s Cube it is not necessary to know the mechanical workings, though it can help build a deeper understanding of the cube; the same concept applies to the group, as the specific construction will not be necessary for the conclusions that follow.
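The clock comparison is easy to make concrete: the hours form the cyclic group of integers mod 12 under addition.

```python
hour = lambda a, b: (a + b) % 12   # addition "wraps around" the clock face
assert hour(11, 3) == 2            # 3 hours past 11 o'clock is 2 o'clock
assert hour(5, 0) == 5             # 0 is the identity
assert hour(7, 5) == 0             # 5 is the inverse of 7 (12 reads as 0)
```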

From a strictly mechanical standpoint, a 2x2 Rubik’s Cube is the same as a 3x3: a 2x2 functions as a 3x3 with smaller edges hidden behind larger corners. The same idea applies to the groups that define the different cubes. A quotient group is a group obtained by collapsing elements of a group into equivalence classes. For the quotient group Q = G/H, Q is constructed by sending elements of G that differ only by elements of H to the same element of Q. If G is the group of the 3x3 and we want Q to be the 2x2, how do we construct H? Thinking back to the physical version, we want to “hide” the edge pieces and keep only the corner pieces. The necessary subgroup turns out to be the group of algorithms that move and rotate only the edge pieces of the Rubik’s Cube. The reader should check their understanding of the following: if all of the possible edge configurations are declared equivalent, the equivalence classes can be represented by the different possible corner configurations, which is just a 2x2.
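A toy model of the construction, with the cyclic group Z/6 standing in for the 3x3 group and the subgroup {0, 3} standing in for the edge moves:

```python
# Elements of G that differ by an element of H collapse into one coset,
# just as scrambles differing only by edge moves collapse to one 2x2 state.
G = range(6)                 # Z/6 under addition mod 6
H = {0, 3}                   # a subgroup playing the role of "edge moves"
cosets = {frozenset((g + h) % 6 for h in H) for g in G}
print(sorted(sorted(c) for c in cosets))
# [[0, 3], [1, 4], [2, 5]]  -> three cosets, a copy of Z/3
```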

The last step to confirm that the results are mathematically sound is to confirm that the group of edge positions and orientations is a normal subgroup of the 3x3 group. Being a subgroup means that the edge-only algorithms are a subset of all scrambles that is closed under composition and inverses, which can be checked through an understanding of the Rubik’s Cube itself. To be a “normal” subgroup, the edge moves must satisfy the property that the combination of any moves on the 3x3, then moves affecting only edges, and then the inverse of the first moves, results in moves affecting only edges. Since edges and corners are independent of each other, the corners are unaffected by the middle step and are thus scrambled and re-solved by the first and last steps. Only edges can change during this process, meaning that the subgroup of edge moves is normal.
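The same conjugation test can be run on a small non-abelian stand-in: in the symmetric group \(S_3\), the even permutations \(A_3\) play the role of the edge subgroup.

```python
from itertools import permutations

def compose(p, q):           # as in the earlier permutation sketch
    return tuple(p[i] for i in q)

def inverse(p):
    inv = [0] * len(p)
    for i, j in enumerate(p):
        inv[j] = i
    return tuple(inv)

S3 = list(permutations(range(3)))
A3 = {(0, 1, 2), (1, 2, 0), (2, 0, 1)}   # the three even permutations

# Normality: g h g^-1 stays in A3 for every g in S3 and h in A3.
assert all(compose(compose(g, h), inverse(g)) in A3 for g in S3 for h in A3)
print("A3 is normal in S3")
```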

It has taken groups, quotient groups, equivalence relations, and normal subgroups all to mathematically show that simply ignoring the edges on a 3x3 Rubik’s Cube leaves a 2x2. Armed with this knowledge, the reader may consider what other quotient groups can be created from the 3x3 by making other collections of scrambles equivalent. For example, can the group of moves that only move and rotate corners be treated the same way we treated the group of moves that only move and rotate edges?

Contributors: Joseph Brewster, David Staudinger