Command and Function Quick Reference

Brian H. Hahn , Daniel T. Valentine , in Essential Matlab for Engineers and Scientists (Fifth Edition), 2013

B.4 Matrices and matrix manipulation

B.4.1 Elementary matrices

eye Identity matrix
linspace Vector with linearly spaced elements
ones Matrix of ones
rand Uniformly distributed random numbers and arrays
randn Normally distributed random numbers and arrays
zeros Matrix of zeros
: (colon) Vector with regularly spaced elements

B.4.2 Special variables and constants

ans Most recent answer
eps Floating point relative accuracy
i or j Imaginary unit, √(−1)
Inf Infinity
NaN Not-a-Number
nargin, nargout Number of actual function input and output arguments
pi 3.14159 26535 897…
realmax Largest positive floating point number
realmin Smallest positive floating point number
varargin, varargout Pass or return variable numbers of arguments

B.4.3 Time and date

calendar Calendar
clock Wall clock (complete date and time)
date You'd never guess
etime Elapsed time
tic, toc Stopwatch
weekday Day of the week

B.4.4 Matrix manipulation

cat Concatenate arrays
diag Create or extract diagonal
fliplr Flip in left/right direction
flipud Flip in up/down direction
repmat Replicate and tile an array
reshape Change shape
rot90 Rotate 90°
tril Extract lower triangular part
triu Extract upper triangular part
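For readers working outside MATLAB, the B.4.4 manipulation functions map almost one-to-one onto NumPy; as a rough illustration (the NumPy calls are our own analogues, not part of this reference):

```python
import numpy as np

A = np.arange(1, 7).reshape(2, 3)   # reshape: change shape
np.fliplr(A)                        # flip in left/right direction (fliplr)
np.flipud(A)                        # flip in up/down direction (flipud)
np.rot90(A)                         # rotate 90 degrees (rot90)
np.tile(A, (2, 2))                  # replicate and tile an array (repmat)
B = np.diag([1, 2, 3])              # create a diagonal matrix (diag)
d = np.diag(B)                      # ... or extract the diagonal (diag)
L = np.tril(np.ones((3, 3)))        # extract lower triangular part (tril)
```

Note that `diag`, like its MATLAB counterpart, either builds a diagonal matrix from a vector or extracts the diagonal from a matrix, depending on its argument.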

B.4.5 Specialized matrices

gallery Test matrices
hilb Hilbert matrix
magic Magic square
pascal Pascal matrix
wilkinson Wilkinson's eigenvalue test matrix

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780123943989000253

Matrices and Matrix Operations

Alexander S. Poznyak , in Advanced Mathematical Tools for Automatic Control Engineers: Deterministic Techniques, Volume 1, 2008

Definition 2.6

For an m × n matrix A, the size r of the identity matrix in the canonical form for A is referred to as the rank of A, written r = rank A. If A = O_(m×n) then rank A = 0; otherwise rank A ≥ 1.

For each of the four canonical forms in (2.18) we have

rank I_(n×n) = n  for m = n
rank [I_(m×m)  O_(m×(n−m))] = m  for m < n
rank [I_(n×n); O_((m−n)×n)] = n  for m > n
rank [I_(r×r)  O_(r×(n−r)); O_((m−r)×r)  O_((m−r)×(n−r))] = r  for r ≤ min(m, n)

Proposition 2.10

For a square n × n matrix A, rank A = n if and only if A is nonsingular.

Proof

It follows straightforwardly from Proposition 2.9.

Corollary 2.4

The rank of a matrix A m × n is equal to the order of its largest nonzero minor.

Several important properties of rank are listed below.

1.

(Frobenius inequality) If A, B and C are rectangular matrices and the product ABC is well defined, then

(2.19) rank(AB) + rank(BC) ≤ rank(B) + rank(ABC)

(2.20) rank(AB) ≤ min{rank(A), rank(B)}

Indeed, taking A and C in (2.19) to be identity matrices of appropriate sizes, we obtain (2.20).

2.

For any complex matrix A

rank(A) = rank(A*A) = rank(AA*)

3.

If P and Q are nonsingular and A is square, then

(2.21) rank ( P A Q ) = rank ( A )

Indeed, by (2.20) it follows

rank(PAQ) ≤ min{rank(P), rank(AQ)} = min{n, rank(AQ)} = rank(AQ) ≤ min{rank(A), rank(Q)} = rank(A) = rank(P⁻¹[PAQ]Q⁻¹) ≤ min{rank(P⁻¹), rank([PAQ]Q⁻¹)} = rank([PAQ]Q⁻¹) ≤ min{rank(PAQ), rank(Q⁻¹)} = rank(PAQ)

4.

(2.22) rank(A) = rank(Aᵀ) = rank(A*)

5.

For any m × n matrices A and B

(2.23) rank(A + B) ≤ rank(A) + rank(B)

6.

(Sylvester's rule) For any A m × n and B n × p

(2.24) rank(A) + rank(B) − n ≤ rank(AB) ≤ min{rank(A), rank(B)}

7.

For any m × m matrix A and n × n matrix B

(2.25) rank(A ⊗ B) = (rank A) · (rank B)

This follows from (2.11).
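The rank properties above are easy to sanity-check numerically; the following sketch (ours, using NumPy's rank routine) exercises (2.19), (2.20) and (2.24) on random matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
rank = np.linalg.matrix_rank

# Random rectangular matrices so that the product ABC is well defined
A = rng.standard_normal((4, 3))
B = rng.standard_normal((3, 5))
C = rng.standard_normal((5, 2))

# Frobenius inequality (2.19): rank(AB) + rank(BC) <= rank(B) + rank(ABC)
assert rank(A @ B) + rank(B @ C) <= rank(B) + rank(A @ B @ C)

# Product bound (2.20): rank(AB) <= min(rank(A), rank(B))
assert rank(A @ B) <= min(rank(A), rank(B))

# Sylvester's rule (2.24), with n the inner dimension of the product AB
n = A.shape[1]
assert rank(A) + rank(B) - n <= rank(A @ B) <= min(rank(A), rank(B))
```

A passing run only shows the inequalities hold for these particular matrices, of course; the proofs in the text establish them in general.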


URL:

https://www.sciencedirect.com/science/article/pii/B9780080446745500055

TRANSFORMATIONS

In Applied Dimensional Analysis and Modeling (Second Edition), 2007

Corollary of Theorem 9-9.

If D is an identity matrix, and if the jth column of C contains all zeros except one "1" in the ith row, then the variables in the ith column of the B matrix and the jth column of the A matrix can be interchanged without altering the C matrix.

Proof.

If D is an identity matrix, then its ith row contains all zeros, except one "1" which is in its ith (= jth) column. Therefore the ith column of D is identical to the jth column of C, and thus their interchange, by Theorem 9-9, will not affect C. This proves the corollary.


URL:

https://www.sciencedirect.com/science/article/pii/B9780123706201500150

Initial Value Problems

S.P. Venkateshan , Prasanna Swaminathan , in Computational Methods in Engineering, 2014

10.8.2 Second order implicit scheme

Consider a first order ODE whose solution is available as y = yₙ at t = tₙ. Now take a step h to define the point tₙ₊₁ = tₙ + h. We consider a quadratic function of the form

(10.84) p₂(t) = a + b(t − tₙ) + c(t − tₙ)²

We require that this quadratic satisfy the following three conditions: p₂(tₙ) = yₙ, p₂⁽¹⁾(tₙ) = f(tₙ, yₙ) and p₂⁽¹⁾(tₙ₊₁) = f(tₙ₊₁, yₙ₊₁). We at once see that a = yₙ. Taking the first derivative of the quadratic with respect to t and setting t = tₙ, we get b = f(tₙ, yₙ); setting t = tₙ₊₁ then gives c = [f(tₙ₊₁, yₙ₊₁) − f(tₙ, yₙ)]/(2h). Thus the desired quadratic is obtained as

(10.85) p₂(t) = yₙ + (t − tₙ) f(tₙ, yₙ) + [f(tₙ₊₁, yₙ₊₁) − f(tₙ, yₙ)]/(2h) · (t − tₙ)²

Hence we have

(10.86) p₂⁽¹⁾(t) = f(tₙ, yₙ) + [f(tₙ₊₁, yₙ₊₁) − f(tₙ, yₙ)]/h · (t − tₙ)

We integrate this with respect to t between t= tn and t= t n+   1 to get

(10.87) yₙ₊₁ = yₙ + h f(tₙ, yₙ) + h[f(tₙ₊₁, yₙ₊₁) − f(tₙ, yₙ)]/2

(10.88) = yₙ + h[f(tₙ, yₙ) + f(tₙ₊₁, yₙ₊₁)]/2

This scheme is second order accurate and implicit. We refer to it as the implicit trapezoidal scheme. It is the same as the AM2 scheme presented earlier. The reader may note that it is also an implicit version of the Heun (modified Euler, or RK2) method. This scheme is A-stable.

The above procedure, when applied to a second order ODE with constant coefficients, requires the simultaneous solution of two linear equations, which may easily be accomplished by a method such as Cramer's rule. When solving a set of linear ODEs we may use any of the methods described in the chapter on linear equations. Consider a system of linear ODEs:

(10.89) Y⁽¹⁾ = A(t)Y + b(t)

Solving the above ODE using trapezoidal rule we get

(10.90) Yₙ₊₁ = Yₙ + h[A(tₙ)Yₙ + A(tₙ₊₁)Yₙ₊₁]/2 + h[b(tₙ) + b(tₙ₊₁)]/2

On further simplification we get

(10.91) [I − (h/2)A(tₙ₊₁)] Yₙ₊₁ = [I + (h/2)A(tₙ)] Yₙ + h[b(tₙ) + b(tₙ₊₁)]/2

where I is the identity matrix. A MATLAB program has been provided below to solve a system of linear ODEs using the trapezoidal method.

Program 10.5. Solution of system of linear ODE using implicit trapezoidal method
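A Python sketch of the same algorithm (ours, not the book's MATLAB listing) implements the update (10.91) for a linear system Y′ = A(t)Y + b(t):

```python
import numpy as np

def trapezoidal_linear(A, b, y0, t0, t_end, h):
    """Integrate Y' = A(t) Y + b(t) with the implicit trapezoidal
    scheme (10.91):
    [I - h/2 A(t_{n+1})] Y_{n+1} = [I + h/2 A(t_n)] Y_n
                                   + h/2 [b(t_n) + b(t_{n+1})]."""
    y = np.asarray(y0, dtype=float)
    t = t0
    I = np.eye(len(y))
    while t < t_end - 1e-12:
        An, An1 = A(t), A(t + h)
        rhs = (I + h / 2 * An) @ y + h / 2 * (b(t) + b(t + h))
        y = np.linalg.solve(I - h / 2 * An1, rhs)
        t += h
    return y

# Example: the stiff system of Example 10.19, u0' = u1, u1' = -(25 u1 + u0)
A = lambda t: np.array([[0.0, 1.0], [-1.0, -25.0]])
b = lambda t: np.zeros(2)
y = trapezoidal_linear(A, b, [1.0, 0.0], 0.0, 1.0, 0.1)
print(y[0])   # close to the tabulated value 0.96227
```

Since A is constant here, each step amounts to solving the same 2 × 2 linear system, exactly as done by Cramer's rule in Example 10.19.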

Example 10.19

Obtain the solution of the second order ODE with constant coefficients of Example 10.18 by second order implicit method. Comment on the effect of step size on the solution.

Solution :

The given ODE is written down, as usual, as two first order ODEs. We identify the functions involved as

u₀⁽¹⁾ = u₁,  u₁⁽¹⁾ = −(25u₁ + u₀)

With a time step of h, the algorithm given by Equation 10.87 translates to the following two equations:

u₁,ₙ₊₁ = u₁,ₙ − h[(25u₁,ₙ + u₀,ₙ) + (25u₁,ₙ₊₁ + u₀,ₙ₊₁)]/2
u₀,ₙ₊₁ = u₀,ₙ + h[u₁,ₙ + u₁,ₙ₊₁]/2

These may be rewritten as

(h/2)u₀,ₙ₊₁ + (1 + 25h/2)u₁,ₙ₊₁ = u₁,ₙ − h(25u₁,ₙ + u₀,ₙ)/2
u₀,ₙ₊₁ − (h/2)u₁,ₙ₊₁ = u₀,ₙ + h u₁,ₙ/2

The above may be recast in the matrix form as

[h/2  1 + 25h/2; 1  −h/2] [u₀,ₙ₊₁; u₁,ₙ₊₁] = [u₁,ₙ − h(25u₁,ₙ + u₀,ₙ)/2; u₀,ₙ + h u₁,ₙ/2]

The determinant of the coefficient matrix is given by

Δ = 1 + 25h/2 + h²/4

Using Cramer's rule the solution is obtained as

u₀,ₙ₊₁ = [u₀,ₙ(1 + 25h/2 − h²/4) + h u₁,ₙ] / (1 + 25h/2 + h²/4)
u₁,ₙ₊₁ = [u₁,ₙ(1 − 25h/2 − h²/4) − h u₀,ₙ] / (1 + 25h/2 + h²/4)

Before we look at the effect of h on the solution we show below the results obtained by taking a step size of h = 0.1. The solution starts with the initial values provided in the problem and is carried forward up to x = 1.

t     u0        u1        yE        ϵy
0 1.00000 0.00000 1.00000 0.00E+00
0.1 0.99778 -0.04440 0.99747 3.10E-04
0.2 0.99359 -0.03932 0.99360 -8.66E-06
0.3 0.98964 -0.03970 0.98964 3.04E-06
0.4 0.98568 -0.03948 0.98568 -1.85E-07
0.5 0.98174 -0.03933 0.98174 6.08E-09
0.6 0.97782 -0.03918 0.97782 -3.38E-08
0.7 0.97391 -0.03902 0.97391 -3.62E-08
0.8 0.97001 -0.03886 0.97001 -4.16E-08
0.9 0.96614 -0.03871 0.96614 -4.66E-08
1 0.96227 -0.03855 0.96227 -5.16E-08

The numerical solution is compared with the exact solution (see Example 10.18). The biggest error occurs after the very first step and is equal to 3.10 × 10⁻⁴. At t = 0.4 the error has reduced to −1.85 × 10⁻⁷. However, with the RK4 method and h = 0.1 the error at t = 0.4 was much larger, equal to −2.77 × 10⁻⁴, even though RK4 is a fourth order accurate method. We also saw that h > 0.11 with RK4 leads to an unstable solution. In the case of the implicit RK2 scheme, which is only second order accurate, the solution is much better. With h = 0.2 we still have an error of only −2.94 × 10⁻⁴ at t = 0.4. It is clear that the implicit RK2 is performing much better than RK4 in this case of a stiff ODE.
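The closed-form Cramer's-rule updates above can be checked against the tabulated values directly (a small Python sketch of ours, not part of the original text):

```python
# Iterate the Cramer's-rule updates of Example 10.19 with h = 0.1,
# starting from u0(0) = 1, u1(0) = 0, for ten steps (t = 0 to t = 1).
h = 0.1
delta = 1 + 25 * h / 2 + h**2 / 4
u0, u1 = 1.0, 0.0
for n in range(10):
    u0, u1 = ((u0 * (1 + 25 * h / 2 - h**2 / 4) + h * u1) / delta,
              (u1 * (1 - 25 * h / 2 - h**2 / 4) - h * u0) / delta)
print(round(u0, 5))   # 0.96227, as in the table above
```

The first step reproduces u₀ = 0.99778 and u₁ = −0.04440, and the tenth step lands on the tabulated t = 1 values.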

In the following table we show the error in the numerical solution with respect to the exact at t = 1 using different step sizes.

h t y yE ϵy
0.05 1 0.96227 0.96227 -1.29E-08
0.1 1 0.96227 0.96227 -5.16E-08
0.2 1 0.96230 0.96227 2.29E-05
0.25 1 0.96216 0.96227 -1.13E-04
0.5 1 0.96143 0.96227 -8.43E-04


URL:

https://www.sciencedirect.com/science/article/pii/B9780124167025500107

Matrices

William Ford , in Numerical Linear Algebra with Applications, 2015

1.3 Powers of Matrices

Definition 1.10

(The identity matrix). The n × n matrix I = [δij], defined by δij = 1 if i = j and δij = 0 if i ≠ j, is called the identity matrix of order n. In other words, the columns of the identity matrix of order n are the vectors

e₁ = (1, 0, 0, …, 0)ᵀ, e₂ = (0, 1, 0, …, 0)ᵀ, …, eₙ = (0, 0, …, 0, 1)ᵀ.

For example, I = [1 0; 0 1] and I = [1 0 0; 0 1 0; 0 0 1]. The identity matrix plays a critical role in linear algebra. When any n × n matrix A is multiplied by the identity matrix, either on the left or the right, the result is A. Thus, the identity matrix acts like 1 in the real number system. For example,

[2 6 1; 7 2 9; 1 5 4] [1 0 0; 0 1 0; 0 0 1] = [2(1)+6(0)+1(0)  2(0)+6(1)+1(0)  2(0)+6(0)+1(1); 7 2 9; 1 5 4] = [2 6 1; 7 2 9; 1 5 4].

Definition 1.11

(kth power of a matrix). If A is an n × n matrix, we define Aᵏ as follows: A⁰ = I and Aᵏ = A × A × ⋯ × A (k factors of A) for k ≥ 1.

For example, A 4  = A  × A  × A  × A. Compute from left to right as follows:

A 2 = A × A , A 3 = A 2 × A , A 4 = A 3 × A .

Example 1.9

The MATLAB exponentiation operator ^ applies to matrices.

>> A = [1 1; 1 0]
A =
     1     1
     1     0

>> A^8
ans =
    34    21
    21    13

A is known as the Fibonacci matrix, since it generates elements from the famous Fibonacci sequence

0 , 1 , 1 , 2 , 3 , 5 , 8 , 13 , 21 , 34 ,

Example 1.10

Let A = [7 4; −9 −5]. Let's investigate powers of A and see if we can find a formula for Aⁿ.

A² = [7 4; −9 −5][7 4; −9 −5] = [13 8; −18 −11], A³ = [13 8; −18 −11][7 4; −9 −5] = [19 12; −27 −17], A⁴ = [19 12; −27 −17][7 4; −9 −5] = [25 16; −36 −23], A⁵ = [25 16; −36 −23][7 4; −9 −5] = [31 20; −45 −29].

The elements in positions (1, 2) and (2, 1) follow a pattern. The element in position (1, 2) is always 4n, and the element at position (2, 1) is always −9n. The element at (1, 1) is 6n + 1, so we only need the pattern for the entry at (2, 2). It is always one (1) more than −6n, so it has the value 1 − 6n. Here is the formula for Aⁿ.

Aⁿ = [1 + 6n  4n; −9n  1 − 6n] if n ≥ 1.

This is not a mathematical proof, just an example of pattern recognition. The result can be formally proved using mathematical induction (see Appendix B).
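The conjectured closed form is easy to spot-check numerically; here is a NumPy sketch (ours, not the book's):

```python
import numpy as np

A = np.array([[7, 4], [-9, -5]])

def A_power_formula(n):
    # Conjectured closed form: A^n = [[1 + 6n, 4n], [-9n, 1 - 6n]]
    return np.array([[1 + 6 * n, 4 * n], [-9 * n, 1 - 6 * n]])

# Compare the formula against direct matrix powers
for n in range(1, 20):
    assert np.array_equal(np.linalg.matrix_power(A, n), A_power_formula(n))

# The Fibonacci matrix from Example 1.9 behaves as claimed, too
F = np.array([[1, 1], [1, 0]])
assert np.array_equal(np.linalg.matrix_power(F, 8),
                      np.array([[34, 21], [21, 13]]))
```

A passing loop is still only pattern checking, of course; the induction argument in Appendix B is what proves the formula for all n.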

Our final example of matrix powers is a result from graph theory. A graph is a set of vertices and connections between them called edges. You have seen many graphs; for example, a map of the interstate highway system is a graph, as is the airline route map at the back of those boring magazines you find on airline flights. Consider the simple graph in Figure 1.8. A path from one vertex v to another vertex w is a sequence of edges that connect v and w. For instance, here are three paths from A to F: A-B-F, A-B-D-F, and A-B-C-E-B-F. The length of a path between v and w is the number of edges that must be crossed in moving from one to the other. For instance, in our three paths, the first has length 2, the second has length 3, and the third has length 5.

Figure 1.8. Undirected graph.

If a graph has n vertices, the adjacency matrix of the graph is an n  × n matrix that specifies the location of edges. The concept is best illustrated by displaying the adjacency matrix for our six vertex graph, rather than giving a mathematical definition.

Adj =
        A  B  C  D  E  F
  A  |  0  1  0  0  0  0 |
  B  |  1  0  1  1  1  1 |
  C  |  0  1  0  0  1  0 |
  D  |  0  1  0  0  0  1 |
  E  |  0  1  1  0  0  0 |
  F  |  0  1  0  1  0  0 |

A one (1) occurs in row A, column B, so there is an edge connecting A and B. Similarly, a one is in row E, column C, so there is an edge connecting E and C. There is no edge between A and D, so row A, column D contains zero (0).

There is a connection between the adjacency matrix of a graph and the number of possible paths between two vertices. Clearly, Adj¹ = Adj specifies all the paths of length 1 from one vertex to another (an edge).

If Adj is the adjacency matrix for a graph, then Adjᵏ defines the number of possible paths of length k between any two vertices. We will not attempt to prove this, but will use our graph as an example.

Adj² = [1 0 1 1 1 1; 0 5 1 1 1 1; 1 1 2 1 1 1; 1 1 1 2 1 1; 1 1 1 1 2 1; 1 1 1 1 1 2],

Adj³ = [0 5 1 1 1 1; 5 4 6 6 6 6; 1 6 2 2 3 2; 1 6 2 2 2 3; 1 6 3 2 2 2; 1 6 2 3 2 2].

By looking at Adj², we see that there is one path of length 2 between C and E, C-B-E, and two paths of length 2 connecting E to E (E-C-E, E-B-E). There are five (5) paths of length 3 between B and A (B-A-B-A, B-D-B-A, B-C-B-A, B-E-B-A, B-F-B-A). Note that if we reverse each path of length three from B to A, we have a path that starts at A and ends at B. Look carefully at Adj, Adj², and Adj³ and notice that the entry at position (i, j) is always the same as the entry at (j, i). Such a matrix is termed symmetric. If you exchange rows and columns, the matrix remains the same. There are many applications of symmetric matrices in science and engineering.
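The path counts can be reproduced in a few lines of NumPy (our sketch; vertex order A through F as above):

```python
import numpy as np

# Adjacency matrix of the graph in Figure 1.8, vertices ordered A..F
Adj = np.array([
    [0, 1, 0, 0, 0, 0],   # A
    [1, 0, 1, 1, 1, 1],   # B
    [0, 1, 0, 0, 1, 0],   # C
    [0, 1, 0, 0, 0, 1],   # D
    [0, 1, 1, 0, 0, 0],   # E
    [0, 1, 0, 1, 0, 0],   # F
])

A_, B_, C_, D_, E_, F_ = range(6)
Adj2 = Adj @ Adj
Adj3 = Adj2 @ Adj

print(Adj2[C_, E_])   # 1: the single length-2 path C-B-E
print(Adj2[E_, E_])   # 2: E-C-E and E-B-E
print(Adj3[B_, A_])   # 5: the five length-3 paths from B to A

# Symmetry: Adj (and hence each of its powers) equals its transpose
assert np.array_equal(Adj, Adj.T)
```

Matrix powers grow quickly, so in practice one computes them incrementally (Adj² then Adj³), just as the text recommends for Aᵏ.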


URL:

https://www.sciencedirect.com/science/article/pii/B9780123944351000016

Modeling II

Bernard Liengme , Keith Hekman , in Liengme's Guide to Excel® 2016 for Scientists and Engineers, 2020

Exercise 2: Temperature Profile Using Matrix Algebra

Consider a thin metal sheet (Fig. 15.4) whose edges are maintained at specified temperatures and which is allowed to come to thermal equilibrium. Our task is to compute the approximate temperatures at various positions on the plate.

Fig. 15.4

Fig. 15.4.

We need to make some assumptions. The first is that the two faces of the plate are thermally insulated. Thus there is no heat transfer perpendicular to the plate. The second assumption starts with the mean-value theory, which states: if P is a point on a plate at thermal equilibrium and C is a circle centered on P and completely on the plate, then the temperature at P is the average value of the temperature on the circle. The calculations required to use this theory are formidable, so we will use an approximation. We shall consider a finite number of equidistant points on the plate and use the discrete mean-value theory, which states that the temperature at point P is the average of the temperatures of P's nearest neighbors.

The most convenient way to arrive at the equidistant point is to divide the plate using equally spaced vertical and horizontal lines. In Fig. 15.4, two such lines have been drawn parallel to each axis. This gives four interior points for the calculation. With such a small number, the results will not be very accurate. However, the methodology is the same regardless of the number of points, and it is simpler to describe and test the method initially with four points.

Applying the averaging rule, the temperatures of the four interior points are given by:

t₁ = (100 + t₂ + t₃ + 200)/4
t₂ = (100 + 100 + t₄ + t₁)/4
t₃ = (t₁ + t₄ + 200 + 200)/4
t₄ = (t₂ + 100 + 200 + t₃)/4

These can be made more general by replacing the numbers with variables:

t₁ = (t₂ + t₃)/4 + (a + d)/4
t₂ = (t₄ + t₁)/4 + (a + b)/4
t₃ = (t₁ + t₄)/4 + (c + d)/4
t₄ = (t₂ + t₃)/4 + (b + c)/4

To facilitate the use of a matrix method, we will write each in a more systematic form:

t₁ = 0.00t₁ + 0.25t₂ + 0.25t₃ + 0.00t₄ + (a + d)/4
t₂ = 0.25t₁ + 0.00t₂ + 0.00t₃ + 0.25t₄ + (a + b)/4
t₃ = 0.25t₁ + 0.00t₂ + 0.00t₃ + 0.25t₄ + (c + d)/4
t₄ = 0.00t₁ + 0.25t₂ + 0.25t₃ + 0.00t₄ + (b + c)/4

This has given us a system of four equations in the form T  = MT  + B where

T = [t₁; t₂; t₃; t₄], M = [0 0.25 0.25 0; 0.25 0 0 0.25; 0.25 0 0 0.25; 0 0.25 0.25 0], B = [(a + d)/4; (a + b)/4; (c + d)/4; (b + c)/4]

We can rearrange the equation in this way:

T − MT = B

(I − M)T = B

T = (I − M)⁻¹B

Matrix I is the identity matrix, a matrix in which the diagonal elements have the value 1 and the off-diagonal elements the value 0. If A is a matrix, then we speak of A⁻¹ as its inverse.

(a)

On Sheet2 of Chap15.xlsm, enter all the text shown in Fig. 15.5.

Fig. 15.5

Fig. 15.5.

(b)

To define the problem, enter the temperature values in A4:D4. Name these cells with the text above them.

(c)

Enter the values of the M matrix as shown in A7:D11.

(d)

Enter the values for the Unit matrix in F8:I11 either by typing the numbers (it is OK to type just the 1s and leave blank cells for the zeros) or by selecting F8:I11, entering the formula =MUNIT(4) and committing it with Ctrl + Shift + Enter.
(e)

Now we make the (I − M) matrix: in K8 enter the formula =F8−A8. Copy this to K8:N11.

(f)

To compute [I − M]⁻¹ select A15:D18, type the formula =MINVERSE(K8:N11) and use Ctrl + Shift + Enter to complete the array formula.
(g)

The formulas for the B matrix are as follows:

F15: =(SideA + SideD)/4
F16: =(SideA + SideB)/4
F17: =(SideC + SideD)/4
F18: =(SideB + SideC)/4
(h)

All that remains is to multiply [I − M]⁻¹ by B. With I15:I18 selected, enter =MMULT(A15:D18, F15:F18) and commit the array formula with Ctrl + Shift + Enter.
(i)

There is an alternative method: omit generating the (I − M)⁻¹ matrix and find the T matrix with the nested formula =MMULT(MINVERSE(K8:N11), F15:F18).

(j)

Save the workbook.
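For readers who want to verify the spreadsheet result independently, the same T = (I − M)⁻¹B computation takes only a few lines of NumPy (our sketch, using the edge temperatures a = b = 100 and c = d = 200 implied by the first set of equations):

```python
import numpy as np

# Coefficient matrix M and boundary vector B from the text
M = np.array([
    [0.00, 0.25, 0.25, 0.00],
    [0.25, 0.00, 0.00, 0.25],
    [0.25, 0.00, 0.00, 0.25],
    [0.00, 0.25, 0.25, 0.00],
])
a = b = 100.0   # edges held at 100
c = d = 200.0   # edges held at 200
B = np.array([(a + d) / 4, (a + b) / 4, (c + d) / 4, (b + c) / 4])

# T = (I - M)^(-1) B, the matrix form of the discrete mean-value rule
T = np.linalg.solve(np.eye(4) - M, B)
print(T)   # [150. 125. 175. 150.]
```

Note that t₁ = t₄ and t₃ = t₂ + 50, exactly the symmetry one would expect from the boundary temperatures.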


URL:

https://www.sciencedirect.com/science/article/pii/B9780128182499000157

A REVIEW OF SOME BASIC CONCEPTS AND RESULTS FROM THEORETICAL LINEAR ALGEBRA

BISWA NATH DATTA , in Numerical Methods for Linear Control Systems, 2004

The Frobenius norm and all subordinate norms are consistent.

Notes

1.

For the identity matrix I, ||I||_F = √n, whereas ||I||₁ = ||I||₂ = ||I||_∞ = 1.

2.

||A||_F² = trace(A*A), where trace(A) is defined as the sum of the diagonal entries of A; that is, if A = (aᵢⱼ), then trace(A) = a₁₁ + a₂₂ + … + aₙₙ. The trace of A will, sometimes, be denoted by Tr(A) or tr(A).
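Both notes are easy to confirm numerically; a quick NumPy sketch (ours, not part of the original text):

```python
import numpy as np

n = 5
I = np.eye(n)

# Note 1: the Frobenius norm of the identity is sqrt(n) ...
assert np.isclose(np.linalg.norm(I, 'fro'), np.sqrt(n))

# ... while the subordinate 1-, 2- and infinity-norms are all 1
for ord_ in (1, 2, np.inf):
    assert np.isclose(np.linalg.norm(I, ord_), 1.0)

# Note 2: ||A||_F^2 = trace(A* A), checked on a random complex matrix
rng = np.random.default_rng(1)
A = rng.standard_normal((4, 3)) + 1j * rng.standard_normal((4, 3))
assert np.isclose(np.linalg.norm(A, 'fro')**2,
                  np.trace(A.conj().T @ A).real)
```

This difference at the identity is one quick way to see that the Frobenius norm cannot be subordinate to any vector norm.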


URL:

https://www.sciencedirect.com/science/article/pii/B9780122035906500069

Numerical Solutions to the Navier-Stokes Equation

Bastian E. Rapp , in Microfluidics: Modelling, Mechanics and Mathematics, 2017

28.2.3 Sequential Dense LU-Decomposition

Rewriting to the Correct Format. Before turning to a numerical solution, we first rewrite Eq. 28.2 such that we can extract the matrix A and the right-hand side b . For this we find

(Eq. 28.3) F(y, z + 1) + F(y, z − 1) + F(y + 1, z) + F(y − 1, z) − 4F(y, z) = (h²/η) dp/dx

As we can see, the right-hand side is a constant, which is why we can simply normalize it to 1. We then find the following four equations

vx(2,3) + vx(2,1) + vx(3,2) + vx(1,2) − 4vx(2,2) = 1
vx(2,4) + vx(2,2) + vx(3,3) + vx(1,3) − 4vx(2,3) = 1
vx(3,3) + vx(3,1) + vx(4,2) + vx(2,2) − 4vx(3,2) = 1
vx(3,4) + vx(3,2) + vx(4,3) + vx(2,3) − 4vx(3,3) = 1

Obviously, several of these values are zero because they are boundary values. If we remove these, we obtain

vx(2,3) + vx(3,2) − 4vx(2,2) = 1
vx(2,2) + vx(3,3) − 4vx(2,3) = 1
vx(3,3) + vx(2,2) − 4vx(3,2) = 1
vx(3,2) + vx(2,3) − 4vx(3,3) = 1

which are four unknowns and four equations. We construct the following linear system from this

(Eq. 28.4) [−4 1 1 0; 1 −4 0 1; 1 0 −4 1; 0 1 1 −4] [vx(2,2); vx(2,3); vx(3,2); vx(3,3)] = [1; 1; 1; 1], i.e., A x = b

General Rule for H × W Meshes. If you look at A in Eq. 28.4, you can see that there is a certain pattern. In general, the matrix for a H × W mesh with H being the number of elements in the y-direction and W being the number of elements in the z-direction, will have a total number of H · W rows and columns. It consists of two types of submatrices of type

(Eq. 28.5) A1 = [−4 1 0 … 0; 1 −4 1 0 …; 0 1 −4 1 …; … ; 0 … 0 1 −4]

IW = [1 0 0 0; 0 1 0 0; 0 0 1 0; 0 0 0 1]

where the second submatrix is the W × W identity matrix IW. The first matrix A1 implements Eq. 28.3 for column y in row z. Obviously, we still require the entries for the row above (y − 1) and below (y + 1), which we obtain by placing identity matrices before and after the submatrix A1. The matrix A is built using the following rule:

Initialize an empty matrix with H · W × H · W entries. Set all values to zero.

Using the row index z and the column index y, place the submatrix A1 at (y  =   1, z  =   1) in the matrix A.

Place an identity matrix IW at (y = W + 1, z = 1), just next to the submatrix A1. This matrix accounts for the entry just below the mesh value (y, z). In the first line, there is no value to account for above (y, z), so this completes the first line of the mesh.

Place the submatrix A1 at (y = W + 1, z = W + 1) in the matrix A.

Place an identity matrix "before" the submatrix A1, at (y = 1, z = W + 1) in the matrix A. This accounts for the value above the mesh entry (y, z).

Place an identity matrix "after" the submatrix A1, at (y = 2W + 1, z = W + 1) in the matrix A. This accounts for the value below the mesh entry (y, z).

Repeat this process until the last line.

In the last line, omit the identity matrix after submatrix A1 , as there is no value to account for below (y, z  = H).

Vector b . As we normalized the right-hand side, the vector b has a value of 1 for each of its W · H entries.
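The assembly rule above can be expressed compactly with Kronecker products; here is a Python sketch (ours, with NumPy's dense solver standing in for the LU decomposition discussed in this chapter):

```python
import numpy as np

def build_A(H, W):
    """Assemble the (H*W) x (H*W) finite-difference matrix described
    above: A1 blocks on the block diagonal and identity matrices I_W
    on the sub- and super-block-diagonals."""
    A1 = (-4 * np.eye(W)
          + np.diag(np.ones(W - 1), 1)
          + np.diag(np.ones(W - 1), -1))
    coupling = np.diag(np.ones(H - 1), 1) + np.diag(np.ones(H - 1), -1)
    return np.kron(np.eye(H), A1) + np.kron(coupling, np.eye(W))

# The 2 x 2 interior mesh reproduces the matrix of Eq. 28.4
A = build_A(2, 2)
b = np.ones(4)              # normalized right-hand side
x = np.linalg.solve(A, b)
print(A)
print(x)                    # all four interior values equal, by symmetry
```

For this symmetric 2 × 2 case every interior unknown comes out the same (−4v + 2v = 1 gives v = −0.5), which is a convenient sanity check before moving to larger meshes.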


URL:

https://www.sciencedirect.com/science/article/pii/B9781455731411500289

Attitude Determination

Enrico Canuto , ... Carlos Perez Montenegro , in Spacecraft Dynamics and Control, 2018

Hint

The result comes by factorizing the identity matrix in Eq. (10.55) as the product of the last expression and of the inverse of the Wahba's covariance in Eq. (10.172), as exemplified in the following series of identities:

(10.130) I₃ = [m₁m₁ᵀ + m₂m₂ᵀ − (m₁ · m₂)(m₁m₂ᵀ + m₂m₁ᵀ) + (m₁ × m₂)(m₁ × m₂)ᵀ] / |m₁ × m₂|² = ((1/σ̃₁²)(I − m₁m₁ᵀ) + (1/σ̃₂²)(I − m₂m₂ᵀ)) P̃_WAHBA.

As expected, the optimal estimate of the problem of Wahba is more efficient than any TRIAD estimate, unless σ̃₁ → 0 in Eq. (10.129), which agrees with Theorem 2 of Section 10.3.2.
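The first equality in (10.130) is a resolution of the identity in the (generally non-orthogonal) basis {m₁, m₂, m₁ × m₂} and is easy to confirm numerically; a sketch of ours, for unit vectors m₁ and m₂:

```python
import numpy as np

rng = np.random.default_rng(2)

def unit(v):
    return v / np.linalg.norm(v)

m1, m2 = unit(rng.standard_normal(3)), unit(rng.standard_normal(3))
cross = np.cross(m1, m2)

# Left-hand side of the first equality in (10.130)
lhs = (np.outer(m1, m1) + np.outer(m2, m2)
       - (m1 @ m2) * (np.outer(m1, m2) + np.outer(m2, m1))
       + np.outer(cross, cross)) / (cross @ cross)

assert np.allclose(lhs, np.eye(3))
```

The bracketed in-plane terms form the projector onto span{m₁, m₂} scaled by |m₁ × m₂|², and the cross-product term supplies the remaining normal direction.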


URL:

https://www.sciencedirect.com/science/article/pii/B9780081007006000106

Online Predictive Control: GoRoSoBo

Enrique Bonet Gil , in Experimental Design and Verification of a Centralized Controller for Irrigation Canals, 2018

8.3.3 Implemented algorithm

We used the Levenberg-Marquardt method because it is robust and applicable to ill-conditioned matrices such as the Hessian matrix. The HIM is a positive definite matrix, but it is also ill-conditioned. The approximation of the Hessian matrix, obtained by multiplying the HIM by its transpose, has a worse condition number than the HIM itself. As noted, the Levenberg-Marquardt method introduces a value (ν) on the diagonal of the Hessian matrix to improve the condition number of the matrix

[8.15] H = ∇²_UU J(U) + ν[I]

where [I] is the identity matrix and ν > 0 is a real value named the Marquardt coefficient. The steps of the implemented algorithm are shown in the following flowchart, which represents each process operation as a box and the order of operations with connecting arrows. This representation illustrates a solution to the optimization problem. Before showing the flowchart, we have to define some matrices and variables that are used in it:

[Q'] is the weight matrix.

[C] is the discrete observer matrix (a matrix of zeros and ones).

r = [C]X_ki^kF(U_i) − Y* − s is the residual vector.

X_ki^kF(U_i) is the state vector.

– [I_M] = I_M[X_ki^kF(U_i)] is the Hydraulic Influence Matrix, all evaluated at U_i.

All the processes of the algorithm are defined in each loop, where the solution U* is calculated in each regulation period (i) for a predictive horizon.
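As a generic illustration of the damping step in [8.15], here is a schematic Python sketch of one Levenberg-Marquardt iteration (the residual and Jacobian below are hypothetical stand-ins, not the GoRoSoBo matrices):

```python
import numpy as np

def levenberg_marquardt_step(J, r, nu):
    """One LM step: solve (J^T J + nu I) dU = -J^T r.
    Adding nu > 0 on the diagonal improves the condition number of the
    (possibly ill-conditioned) Gauss-Newton Hessian J^T J."""
    H = J.T @ J + nu * np.eye(J.shape[1])
    return np.linalg.solve(H, -J.T @ r)

# Tiny example: residual r(u) = A u - y with a nearly singular A
A = np.array([[1.0, 1.0], [1.0, 1.0001]])
y = np.array([2.0, 2.0001])
u = np.zeros(2)
for _ in range(50):
    r = A @ u - y
    u = u + levenberg_marquardt_step(A, r, nu=1e-3)
print(u)   # approaches the solution [1, 1]
```

The damping trades a little bias per step for a well-conditioned linear solve, which is precisely why the method suits an ill-conditioned Hessian like the one built from the HIM.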

Figure 8.13

Figure 8.13. GoRoSoBo scheduled: initial version for the unconstrained optimization problem algorithm


URL:

https://www.sciencedirect.com/science/article/pii/B9781785483073500082