The cross product can be written as the cofactor expansion of a symbolic determinant whose first row contains the standard basis vectors:

$$\vec a \times \vec b = \begin{vmatrix} \vec i & \vec j & \vec k \\ a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \end{vmatrix} = (a_2 b_3 - a_3 b_2)\,\vec i - (a_1 b_3 - a_3 b_1)\,\vec j + (a_1 b_2 - a_2 b_1)\,\vec k$$

This formula is not as difficult to remember as it might at first appear. The terms alternate in sign, and each 2×2 determinant is the original determinant with the row of standard basis vectors removed along with the column below the standard basis vector that multiplies it.
The second method says to take the determinant as listed above and then copy the first two columns onto the end. We then have three diagonals that move from left to right and three diagonals that move from right to left. We multiply along each diagonal, add the products from the diagonals that move from left to right, and subtract the products from the diagonals that move from right to left.
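As a concrete check on both mnemonics, here is a minimal sketch assuming NumPy is available; `det3_sarrus` and `cross3` are hypothetical helper names introduced here, not part of the original discussion.

```python
import numpy as np

def det3_sarrus(m):
    """3x3 determinant via the copied-columns trick: add the products
    along the three left-to-right diagonals and subtract the products
    along the three right-to-left diagonals."""
    a = np.asarray(m, dtype=float)
    return (a[0, 0] * a[1, 1] * a[2, 2]
          + a[0, 1] * a[1, 2] * a[2, 0]
          + a[0, 2] * a[1, 0] * a[2, 1]
          - a[0, 2] * a[1, 1] * a[2, 0]
          - a[0, 0] * a[1, 2] * a[2, 1]
          - a[0, 1] * a[1, 0] * a[2, 2])

def cross3(a, b):
    """Cross product from the cofactor expansion of the symbolic
    determinant with rows (i, j, k), (a1, a2, a3), (b1, b2, b3)."""
    return np.array([a[1] * b[2] - a[2] * b[1],
                     a[2] * b[0] - a[0] * b[2],
                     a[0] * b[1] - a[1] * b[0]])

a, b = np.array([2.0, 1.0, -1.0]), np.array([-3.0, 4.0, 1.0])
assert np.allclose(cross3(a, b), np.cross(a, b))
print(cross3(a, b))  # [ 5.  1. 11.]
```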
This is best seen in an example: computing both a × b and b × a shows that switching the order of the vectors in the cross product simply changes all the signs in the result. Note as well that this means the two cross products point in exactly opposite directions, since they differ only by a sign. There is also a geometric interpretation of the cross product. First, the cross product is orthogonal to both of the original vectors.
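A quick numerical verification of both claims (the vectors a and b here are arbitrary examples, assuming NumPy):

```python
import numpy as np

a = np.array([2.0, 1.0, -1.0])
b = np.array([-3.0, 4.0, 1.0])

# Switching the order flips every sign: a x b = -(b x a).
assert np.allclose(np.cross(a, b), -np.cross(b, a))

# The cross product is orthogonal to both original vectors.
c = np.cross(a, b)
assert np.isclose(np.dot(c, a), 0.0)
assert np.isclose(np.dot(c, b), 0.0)
```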
One way that we know to get an orthogonal vector is to take a cross product. So, if we can find two vectors that we know lie in the plane, the cross product of these two vectors will be orthogonal to both of them. And since both vectors lie in the plane, the cross product will also be orthogonal to the plane.
So, we need two vectors that are in the plane. This is where the points come into the problem. Since all three points lie in the plane, any vector between them must also lie in the plane. There are many ways to get two vectors between these points; we will use the vectors from one of the points to each of the other two.
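For instance, a short sketch of the whole construction (the points P, Q, R are made-up examples, assuming NumPy):

```python
import numpy as np

# Three hypothetical points assumed to lie in the plane of interest.
P = np.array([1.0, 0.0, 0.0])
Q = np.array([1.0, 1.0, 1.0])
R = np.array([2.0, -1.0, 3.0])

PQ = Q - P            # both difference vectors lie in the plane
PR = R - P
n = np.cross(PQ, PR)  # orthogonal to PQ and PR, hence to the plane

assert np.isclose(np.dot(n, PQ), 0.0)
assert np.isclose(np.dot(n, PR), 0.0)
```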
The cross product of these two vectors will be orthogonal to the plane. Note also that the cross product of two parallel vectors is the zero vector, which gives us another test for parallel vectors. The determinant in the last fact is computed in the same way that the cross product is computed; we will see an example of this computation shortly. The paper also shows that it is possible to construct a symmetric matrix having the same set of eigenvectors as a non-symmetric matrix, and it provides, without proof, the formula for the skew-symmetric "tilde" matrix, which performs the cross product in n-dimensional space.
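The paper's n-dimensional formula is not reproduced here, but in three dimensions the "tilde" matrix is the familiar skew-symmetric matrix below (a sketch assuming NumPy; `tilde` is a hypothetical name):

```python
import numpy as np

def tilde(a):
    """Skew-symmetric 'tilde' matrix of a 3-vector:
    tilde(a) @ b equals the cross product a x b."""
    return np.array([[0.0,  -a[2],  a[1]],
                     [a[2],   0.0, -a[0]],
                     [-a[1], a[0],   0.0]])

a, b = np.array([2.0, 1.0, -1.0]), np.array([-3.0, 4.0, 1.0])
assert np.allclose(tilde(a) @ b, np.cross(a, b))
assert np.allclose(tilde(a), -tilde(a).T)  # skew-symmetry
```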
From the preceding equation, what is interesting for this study is case (c), that is, the case of d_i perpendicular to v_i. This fact implies that the vector v_i must be perpendicular not only to the vector d_i but also to all the d_k vectors, and therefore to all the row vectors of the matrix D.
This implies that, for a simple singular matrix, all the v_i vectors are perpendicular to all the row vectors d_i^T and, as a consequence, all the v_i vectors are parallel. This implies the minimum value for the condition number, which is one. In fact, the condition number of a matrix is the ratio of its largest singular value to its smallest, and orthogonal matrices have all their singular values equal to one. For the purpose of inversion, for instance, a matrix is ill-conditioned or poorly conditioned (small errors in the data imply large errors in the solution) if its condition number is large.
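These definitions are easy to check numerically (a sketch assuming NumPy; the random orthogonal matrix is an illustration only):

```python
import numpy as np

def cond2(m):
    """Condition number as the ratio of the largest singular value
    to the smallest (the 2-norm condition number)."""
    s = np.linalg.svd(m, compute_uv=False)
    return s[0] / s[-1]

# An orthogonal matrix has all singular values equal to one,
# so its condition number attains the minimum value of one.
q, _ = np.linalg.qr(np.random.default_rng(0).standard_normal((4, 4)))
print(cond2(q))                                  # ~1.0
print(np.isclose(cond2(q), np.linalg.cond(q)))   # agrees with NumPy's cond
```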
A matrix with condition number equal to one is called perfectly conditioned. For this matrix any vector is an eigenvector and, therefore, any matrix (for instance, the identity) serves as an eigenvector matrix; its computation is therefore unnecessary. The square matrix M_0 is formed with the d_ij elements identified by the n−m rows whose indices are stored in the "Used Rows" (UR) integer vector and by the n−m columns whose indices are stored in the "Used Columns" (UC) integer vector of matrix D.
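The extraction of M_0 can be sketched as follows (the matrix D and the index vectors UR and UC below are made-up data for illustration, assuming NumPy):

```python
import numpy as np

# Hypothetical data: the names D, UR, and UC follow the text,
# but the values here are for illustration only.
D = np.arange(25, dtype=float).reshape(5, 5)
UR = [0, 2, 4]  # "Used Rows":    indices of the n-m retained rows
UC = [1, 2, 3]  # "Used Columns": indices of the n-m retained columns

# M0 collects the d_ij elements at the intersections of UR and UC.
M0 = D[np.ix_(UR, UC)]
print(M0.shape)  # (3, 3)
```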
The following numerical example is included to clarify the described procedure. In these complex cases, however, software able to deal with complex operations is needed. When such a software tool is not available, the complex case must be transformed into a form that involves only real operations; this is the subject of this section. Therefore both matrices M_k and D_k increase in size, at each loop, by two. This alternate procedure has a clear advantage over the one based on the earlier equation.
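One standard way to trade complex arithmetic for real arithmetic, and to see why the matrices double in size, is to represent C = A + iB by the real block matrix [[A, -B], [B, A]]; whether this matches the paper's exact construction is an assumption (a sketch assuming NumPy):

```python
import numpy as np

def real_form(c):
    """Real 2n x 2n representation of the complex n x n matrix
    C = A + iB: the block matrix [[A, -B], [B, A]]."""
    a, b = c.real, c.imag
    return np.block([[a, -b], [b, a]])

rng = np.random.default_rng(1)
c = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
z = rng.standard_normal(3) + 1j * rng.standard_normal(3)

# The doubled real system reproduces the complex product using
# real operations only.
w = real_form(c) @ np.concatenate([z.real, z.imag])
assert np.allclose(w[:3] + 1j * w[3:], c @ z)
```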
Numerical tests have shown, however, that the condition number of the eigenvector matrix so computed is quite good; further studies are needed to obtain the solution with the minimum condition number. The next numerical example will clarify the described alternate procedure. Let J_i be the generic i-th Jordan block. The value n_j represents the length of the chain associated with the true eigenvector v_j.
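For reference, a Jordan block and its chain length can be illustrated as follows (a sketch assuming NumPy; `jordan_block` is a hypothetical helper):

```python
import numpy as np

def jordan_block(lam, n):
    """n x n Jordan block: eigenvalue lam on the diagonal and ones
    on the upper diagonal; n is the length of the associated chain."""
    return lam * np.eye(n) + np.diag(np.ones(n - 1), k=1)

J = jordan_block(3.0, 4)
N = J - 3.0 * np.eye(4)
# (J - lam*I)^n annihilates the whole chain, but no smaller power does.
print(np.allclose(np.linalg.matrix_power(N, 4), 0))  # True
print(np.allclose(np.linalg.matrix_power(N, 3), 0))  # False
```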
Therefore the associated computed eigenvector w_k, represented by the first n elements of the h_k solution, must be equal to the true eigenvector v_1. As is easily observed, this substitution reduces the system of equations accordingly. This modification allows direct computation of the generalized eigenvector w_k as well as the upper-diagonal coupling elements s_k.
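The chain structure behind this computation can be sketched under simplifying assumptions (a single eigenvalue, chain solved by least squares; this illustrates the relation (A - lam*I) w_{k+1} = w_k, not the paper's exact algorithm):

```python
import numpy as np

# A full-defective example: one eigenvalue lam = 3 with a chain of
# length 3 (the matrix A is made up for illustration).
A = np.array([[3.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [0.0, 0.0, 3.0]])
lam, n = 3.0, 3
N = A - lam * np.eye(n)

w = [np.array([1.0, 0.0, 0.0])]  # true eigenvector: N @ w[0] = 0
for k in range(n - 1):
    # Each generalized eigenvector solves (A - lam*I) w_{k+1} = w_k.
    wk1, *_ = np.linalg.lstsq(N, w[k], rcond=None)
    w.append(wk1)
    assert np.allclose(N @ wk1, w[k])  # chain relation holds
```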
The described procedure is clarified by the next numerical example. The condition of having unitary upper-diagonal elements leads to an alternate procedure, based on the fact that the matrix D_r has full rank. This paper does not include a solution technique for derogatory matrices, which would complete the analysis of defective matrices, since such a technique is still unknown.
In general, matrix D is non-symmetric. Therefore D can be replaced by S when the preferred form is symmetric. In particular, such a matrix has non-negative eigenvalues coincident with its singular values, while its eigenvector matrix is equal to both its right and left eigenvector matrices.
This eigenvalue can be computed by premultiplying the corresponding equation by v^T. Consequently, the unit vector v, which is an eigenvector of D associated with its zero eigenvalue, is also an eigenvector of the symmetric matrix S associated with its zero eigenvalue. This technique can be applied to a real or complex matrix, as well as to a real matrix having complex-conjugate eigenvalues. The paper then provides, for when a software tool able to deal with complex operations is not available, two different algorithms for the complex problem that consider only real operations.
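One symmetric matrix with all of these properties can be built directly from the SVD of D; whether this coincides with the paper's construction of S is an assumption (a sketch assuming NumPy):

```python
import numpy as np

# From the SVD D = U diag(s) V^T, take  S = V diag(s) V^T.
# S is symmetric, its eigenvalues are the (non-negative) singular
# values of D, and any v with D v = 0 also satisfies S v = 0.
rng = np.random.default_rng(2)
D = rng.standard_normal((4, 3)) @ rng.standard_normal((3, 4))  # rank <= 3

U, s, Vt = np.linalg.svd(D)
S = Vt.T @ np.diag(s) @ Vt

v = Vt[-1]                    # right singular vector for sigma = 0
assert np.allclose(D @ v, 0)  # v is a null eigenvector of D ...
assert np.allclose(S @ v, 0)  # ... and of the symmetric matrix S
assert np.allclose(S, S.T)
```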
One of them provides the computation of orthogonal eigenvectors. Then, in order to extend the application to defective matrices, an algorithm has been developed for computing all the generalized eigenvectors associated with a true eigenvector of a full-defective matrix, as well as the upper-diagonal coupling elements. Because of the complexity of the proposed techniques, and for the sake of clarity, the present paper illustrates the devised procedures with numerical examples.