
QR Algorithm & Eigenvalue Order

11 comments, last by Dirk Gregorius 5 years, 11 months ago

Hi All,

I'm using the QR algorithm to break a matrix down into eigenvectors & eigenvalues. The code I'm using looks something like this:


QRAlgorithmResult QRAlgorithm(Matrix S) {
  Matrix A = S;         // Converges toward the eigenvalues on its diagonal
  Matrix Ev = Matrix(); // Accumulates the eigenvectors (default-constructed, assumed to be identity)

  for (int i = 0; i < num_iteration; ++i) {
    QRDecompResult QR = QRDecompose(A);
    Ev = Ev * QR.Q;
    A = QR.R * QR.Q;
  }

  Vector eigenvalues(A[0], A[4], A[8]);
  Vector[] eigenvectors = Vector[] {
    Vector(Ev[0], Ev[1], Ev[2]),
    Vector(Ev[3], Ev[4], Ev[5]),
    Vector(Ev[6], Ev[7], Ev[8])
  };

  return QRAlgorithmResult(eigenvectors, eigenvalues);
}

The problem is that the largest eigenvalues always come first! So if I have a matrix where the eigenvalues represent scale, and the scale I'm looking for is (1, 15, 1), the resulting eigenvalues are (15, 1, 1). The eigenvectors end up in the same (re-ordered) order. This is the matrix that produces the behavior described above:


 1.00000,   0.00000, 0.00000, 0.00000,
 0.00000,  15.00000, 0.00000, 0.00000,
 0.00000,   0.00000, 1.00000, 0.00000,
 0.00000,   0.00000, 0.00000, 1.00000

So, my question is: how can I re-order the results so that 15 ends up where I expect it to be? I've been trying to figure this out for a few days now, but I've got nothing :(


"So, my question is. How can i re-order the results, so that 15 is where i expect it to be?" What are you expecting exactly?

The matrix that's passed in, S, contains some scaling and skewing info. I'm trying to decompose it so I can figure out just the scale. I'd expect the result to be (1, 15, 1), where the eigenvalue of 15 stays on the y axis.

Looking at the resulting eigenvectors & eigenvalues, it's obvious that 15 belongs to the y basis vector:


eigenvectors: [
	[0, 1, 0],
	[-1, 0, 0],
	[0, 0, 1]
]
eigenvalues: [15, 1, 1]

But this is a best-case scenario. Decomposing another matrix, like:


  1.59727,  0.15045,    0, 0,
 -1.20363,  0.19966,    0, 0,
         0,       0, 0.25, 0,
         0,       0,    0, 1

yields much less obvious results:


eigenvectors: [
	[0.79864, -0.60182, 0],
	[0.60182,  0.79864, 0],
	[0, 0, 1]
]
eigenvalues: [2, 0.25, 0.25]

 

I'm expecting that I'll have to re-order both my A and Ev matrices on each iteration; I'm just not sure what the re-ordering logic should be. The reason I assume the re-ordering has to happen on each iteration is the first example: 15 shifts to what I would consider the "wrong" spot right around iteration 4.

I assume QRDecompose() sorts the results. You need to change the source code to not sort. If you don't have access to the source, you need to do it yourself. As a side note, QR decomposition is not the best method for decomposing affine transformations as used in games or animation. You might want to check this out:

http://research.cs.wisc.edu/graphics/Courses/838-s2002/Papers/polar-decomp.pdf

Source code is available as well.
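
For reference, the orthogonal factor in that paper can be computed with a very small iteration: average the matrix with its inverse-transpose until it stops changing, then the symmetric factor is Transpose(Q) * M. A rough, untested C++ sketch, assuming a row-major 3x3 stored as a float[9]:

// Sketch of the polar decomposition iteration used in the paper above:
// Q_{k+1} = 0.5 * (Q_k + Inverse(Transpose(Q_k))), starting from Q_0 = M.
// Row-major 3x3 matrix stored as float[9]; S would then be Transpose(Q) * M.
#include <cmath>

static void Transpose3(const float m[9], float out[9]) {
    out[0] = m[0]; out[1] = m[3]; out[2] = m[6];
    out[3] = m[1]; out[4] = m[4]; out[5] = m[7];
    out[6] = m[2]; out[7] = m[5]; out[8] = m[8];
}

static bool Inverse3(const float m[9], float out[9]) {
    float det = m[0] * (m[4] * m[8] - m[5] * m[7])
              - m[1] * (m[3] * m[8] - m[5] * m[6])
              + m[2] * (m[3] * m[7] - m[4] * m[6]);
    if (std::fabs(det) < 1e-8f) return false;  // singular, give up
    float inv = 1.0f / det;
    out[0] =  (m[4] * m[8] - m[5] * m[7]) * inv;
    out[1] = -(m[1] * m[8] - m[2] * m[7]) * inv;
    out[2] =  (m[1] * m[5] - m[2] * m[4]) * inv;
    out[3] = -(m[3] * m[8] - m[5] * m[6]) * inv;
    out[4] =  (m[0] * m[8] - m[2] * m[6]) * inv;
    out[5] = -(m[0] * m[5] - m[2] * m[3]) * inv;
    out[6] =  (m[3] * m[7] - m[4] * m[6]) * inv;
    out[7] = -(m[0] * m[7] - m[1] * m[6]) * inv;
    out[8] =  (m[0] * m[4] - m[1] * m[3]) * inv;
    return true;
}

// Returns the orthogonal factor Q of M = Q * S.
void PolarDecomposeQ(const float M[9], float Q[9]) {
    for (int i = 0; i < 9; ++i) Q[i] = M[i];
    for (int iter = 0; iter < 32; ++iter) {
        float Qt[9], QtInv[9];
        Transpose3(Q, Qt);
        if (!Inverse3(Qt, QtInv)) break;
        float maxDiff = 0.0f;
        for (int i = 0; i < 9; ++i) {
            float next = 0.5f * (Q[i] + QtInv[i]);
            maxDiff = std::fmax(maxDiff, std::fabs(next - Q[i]));
            Q[i] = next;
        }
        if (maxDiff < 1e-6f) break;  // converged
    }
}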

 

I'm not doing any sorting at the moment. As far as I understand, the ordering is just something that naturally happens as the iteration converges.

Looking at the "Example (Basic QR-Method)" on page 2 & 3 of this article article, i assumed this statement to be true:

Quote

A_{k} approaches a triangular matrix where the diagonal elements are ordered by (descending) magnitude.

I'm just trying to undo the ordering that naturally occurs there. For context, the source for my QRDecompose function looks like this:


QRDecompResult QRDecompose(Matrix A) {
  Vector x = Vector(A[0], A[1], A[2]);
  Vector y = Vector(A[3], A[4], A[5]);
  Vector z = Vector(A[6], A[7], A[8]);

  // Gram-Schmidt: orthogonalize y and z against the preceding axes
  y = y - Projection(x, y);
  z = (z - Projection(x, z)) - Projection(y, z);

  x = Normalize(x);
  y = Normalize(y);
  z = Normalize(z);

  Matrix Q(
    x[0], x[1], x[2],
    y[0], y[1], y[2],
    z[0], z[1], z[2]
  );

  Matrix R = Transpose(Q) * A;

  return QRDecompResult(Q, R);
}

Shoemake's paper makes sense up to the "Direct Stretch Animation" part; he completely loses me there. I'm pretty sure that section describes the "snuggle" function from here, but the code is even harder to read than the article.

This code won't actually run in a game, so I'm not too worried about speed. I'm mostly trying to make something that's easy to read and understand, and for me this method checks both of those boxes.

EDIT: I'm thinking, if the eigenvalues come out in descending order, could I sort the basis vectors of the original matrix (the one being decomposed) by length? Would that sorting match the eigenvalue order? If it does, this would be a trivial problem to solve.
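
Roughly, the idea in that EDIT would look something like this (an untested sketch, assuming a row-major 3x3 in a float[9] with the basis vectors in the rows, as in my pseudocode above):

// Sketch of the EDIT above: rank the basis vectors (rows) of the input matrix
// by length, descending, as a guess at the permutation the QR iteration
// applied to the eigenvalues. Row-major 3x3 in float[9].
void BasisOrderByLength(const float S[9], int order[3]) {
    float len2[3];
    for (int r = 0; r < 3; ++r) {
        len2[r] = S[r * 3 + 0] * S[r * 3 + 0]
                + S[r * 3 + 1] * S[r * 3 + 1]
                + S[r * 3 + 2] * S[r * 3 + 2];
        order[r] = r;
    }
    // Tiny descending sort of the three indices by squared length.
    for (int i = 0; i < 2; ++i)
        for (int j = i + 1; j < 3; ++j)
            if (len2[order[j]] > len2[order[i]]) {
                int tmp = order[i]; order[i] = order[j]; order[j] = tmp;
            }
}
// order[k] then says which original row the k-th (largest) eigenvalue would
// correspond to -- if the assumption in the EDIT holds at all.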

EDIT2: Is Shoemake's paper saying that the basis vector of the original matrix that each eigenvector corresponds to is the one the eigenvector shares the smallest angle with?

Sorry, I would need to look this up myself. I noticed one thing in your implementation, though: I would use a cross product for the z-axis instead of the Gram-Schmidt projection. I think this is actually common for 3D matrices, as it is numerically more robust.
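
In case a concrete version helps, here is roughly what that change could look like, sketched with plain float[3] vectors and a row-major float[9] for Q so it stands on its own (Dot/Normalize/Cross written out; untested):

// Sketch of the suggestion: orthogonalize y against x with Gram-Schmidt as
// before, but take z = Cross(x, y) instead of projecting z twice.
#include <cmath>

static float Dot(const float a[3], const float b[3]) {
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}

static void Normalize(float v[3]) {
    float len = std::sqrt(Dot(v, v));
    if (len > 0.0f) { v[0] /= len; v[1] /= len; v[2] /= len; }
}

static void Cross(const float a[3], const float b[3], float out[3]) {
    out[0] = a[1] * b[2] - a[2] * b[1];
    out[1] = a[2] * b[0] - a[0] * b[2];
    out[2] = a[0] * b[1] - a[1] * b[0];
}

// Builds the orthonormal frame used as Q (rows x, y, z). Assumes x is non-zero.
void OrthonormalFrame(const float xIn[3], const float yIn[3], float Q[9]) {
    float x[3] = { xIn[0], xIn[1], xIn[2] };
    float y[3] = { yIn[0], yIn[1], yIn[2] };

    // y = y - Projection(x, y)
    float s = Dot(x, y) / Dot(x, x);
    y[0] -= s * x[0]; y[1] -= s * x[1]; y[2] -= s * x[2];

    float z[3];
    Cross(x, y, z);  // z is orthogonal to x and y by construction

    Normalize(x); Normalize(y); Normalize(z);
    Q[0] = x[0]; Q[1] = x[1]; Q[2] = x[2];
    Q[3] = y[0]; Q[4] = y[1]; Q[5] = y[2];
    Q[6] = z[0]; Q[7] = z[1]; Q[8] = z[2];
}

One thing to keep in mind: Cross(x, y) always yields a right-handed frame, so any reflection in the original z axis ends up in R rather than Q.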

Another thing I would think about is whether a full decomposition is actually needed. If your matrix simply has the form M = R * S (assuming column vectors and S being a diagonal scale matrix), then you can trivially extract the scale by computing the length of each column. No decomposition needed. If you have an arbitrary transformation with shear and possibly reflection, then QR decomposition will not be sufficient, IIRC.
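
As a concrete example of the M = R * S case, something like this should do (a sketch assuming column vectors and a row-major 3x3 stored in a float[9], so column c lives at indices c, c + 3, c + 6):

// Sketch of the "no decomposition needed" case: if M = R * S with S diagonal
// (column-vector convention), each column of M has length equal to the
// corresponding scale factor.
#include <cmath>

void ExtractScale(const float M[9], float scale[3]) {
    for (int c = 0; c < 3; ++c) {
        float x = M[c];          // row 0, column c
        float y = M[c + 3];      // row 1, column c
        float z = M[c + 6];      // row 2, column c
        scale[c] = std::sqrt(x * x + y * y + z * z);
    }
}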

Thanks for the cross product suggestion! I hadn't considered that, but it makes a lot of sense!

I'm actually implementing the affine decomposition Shoemake writes about in Graphics Gems IV. All of it is working except this last step (the SpectoralDecomposition step; I'm just using the QR algorithm to do it), which gives the right values, just in the "wrong" order.

The full context of what I'm trying to do is this:


AffineDecompositionResult AffineDecomposition(Matrix M) {
    FactorTranslationResult factorTranslation = FactorTranslation(M);
    PolarDecompResult polarDecomposition = PolarDecomposition(factorTranslation.X);
    FactorRotationResult factorRotation = FactorRotation(polarDecomposition.Q);
    SpectoralDecompositionResult spectoralDecomp = SpectoralDecomposition(polarDecomposition.S);

    // Everything in here is a matrix
    AffineDecompositionResult result;
    result.T = factorTranslation.T;
    result.F = factorRotation.F;
    result.R = factorRotation.R;
    result.U = spectoralDecomp.U;
    result.K = spectoralDecomp.K;
    result.Ut = Transpose(spectoralDecomp.U);

    return result;
}
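
One way to sanity-check the factors while debugging is to multiply them back together and compare with the input. A rough sketch for the linear 3x3 part, assuming the factors recompose as X = F * R * U * K * Ut (which is how I read the paper) and row-major float[9] matrices:

// Debug sketch: rebuild the linear (upper 3x3) part from the factors and
// compare it against the input X that went into the polar decomposition.
#include <cmath>

static void Mul3(const float a[9], const float b[9], float out[9]) {
    for (int r = 0; r < 3; ++r)
        for (int c = 0; c < 3; ++c)
            out[r * 3 + c] = a[r * 3 + 0] * b[0 * 3 + c]
                           + a[r * 3 + 1] * b[1 * 3 + c]
                           + a[r * 3 + 2] * b[2 * 3 + c];
}

// Returns the largest absolute difference between X and F * R * U * K * Ut.
float RecompositionError(const float X[9], const float F[9], const float R[9],
                         const float U[9], const float K[9], const float Ut[9]) {
    float t0[9], t1[9], rebuilt[9];
    Mul3(K, Ut, t0);       // K * Ut
    Mul3(U, t0, t1);       // U * K * Ut
    Mul3(R, t1, t0);       // R * U * K * Ut
    Mul3(F, t0, rebuilt);  // F * R * U * K * Ut
    float err = 0.0f;
    for (int i = 0; i < 9; ++i)
        err = std::fmax(err, std::fabs(rebuilt[i] - X[i]));
    return err;
}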

I tried the second method mentioned in my last post, which was to pair each eigenvector with the basis vector it has the smallest angle with. It kind of works (if all elements of the eigenvector are positive). I don't feel confident in it at all, and I'm really just stabbing in the dark with that :(
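
For reference, the pairing attempt looks roughly like this (an untested sketch with float[3] vectors; it compares with the absolute value of the dot product, since an eigenvector is only defined up to sign, which might be the part that was tripping me up):

// Sketch of the "smallest angle" pairing: greedily match each (unit)
// eigenvector to the basis vector (row of the original matrix) it is most
// aligned with, each basis row claimed at most once.
#include <cmath>

// axisOf[i] receives the index of the basis row paired with eigenvector i.
void PairEigenvectorsWithBasis(const float eigenvectors[3][3],
                               const float basis[3][3], int axisOf[3]) {
    bool taken[3] = { false, false, false };
    for (int i = 0; i < 3; ++i) {
        int best = -1;
        float bestCos = -1.0f;
        for (int b = 0; b < 3; ++b) {
            if (taken[b]) continue;  // each basis row used only once
            float dot = eigenvectors[i][0] * basis[b][0]
                      + eigenvectors[i][1] * basis[b][1]
                      + eigenvectors[i][2] * basis[b][2];
            float len = std::sqrt(basis[b][0] * basis[b][0]
                                + basis[b][1] * basis[b][1]
                                + basis[b][2] * basis[b][2]);
            float c = (len > 0.0f) ? std::fabs(dot) / len : 0.0f;
            if (c > bestCos) { bestCos = c; best = b; }
        }
        axisOf[i] = best;
        taken[best] = true;
    }
}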

 

Did you consider writing an email to Shoemake? Over the years I have often contacted researchers directly, and most of the time they replied. These people put quite some effort into their research and are usually happy to discuss their work.

 

That's a fantastic idea, but I can't find up-to-date contact info for him anywhere :(

But after reading through "Matrix Animation and Polar Decomposition" a few more times, I finally get what the "snuggle" function is doing, and implementing an un-optimized version of it is fairly straightforward. I have an implementation in place that mostly works right now, but there is one thing I could use some help with.

The paper just takes all axis permutations of the input matrix and checks which one has the smallest rotation (largest w component of the quaternion representing the matrix). There are supposed to be 24 permutations; I have 48. The permutations are supposed to be:

Quote

All axis permutations (6) times all axis sign combinations (8), achievable by a rotation (divide by 2).

My 48 come from the six axis permutations, each combined with all axis sign combinations (6 * 8). What I'm not getting is the "achievable by a rotation" part. It seems like that should cut the number of permutations I have in half. Which axis / sign combinations are not achievable by a rotation? For example, here are the permutations with the x axis in the top row:


// Two permutations with x on top
[ x[0], x[1], x[2],
  y[0], y[1], y[2],
  z[0], z[1], z[2] ],
[ x[0], x[1], x[2],
  z[0], z[1], z[2],
  y[0], y[1], y[2] ]
// 14 potential combinations with negated axes
[ -x[0], -x[1], -x[2],
   y[0],  y[1],  y[2],
   z[0],  z[1],  z[2] ]
[ -x[0], -x[1], -x[2],
  -y[0], -y[1], -y[2],
   z[0],  z[1],  z[2] ],
[ -x[0], -x[1], -x[2],
  -y[0], -y[1], -y[2],
  -z[0], -z[1], -z[2] ],
// .... The rest are omitted, but there are a bunch of these!

Which of those are not achievable by a rotation, and why?

I think you can cut out those permutations that would change the handedness of the rotation frame. You cannot rotate from a RH system into an LH system, I guess.
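
In other words, a signed axis permutation is achievable by a rotation exactly when its determinant is +1. A small standalone check (just an untested sketch) that counts them should report 24 of the 48:

// Of the 6 axis permutations x 8 sign combinations (48 matrices), only those
// with determinant +1 preserve handedness and are reachable by a pure
// rotation -- exactly half, i.e. the 24 candidates the paper talks about.
#include <cstdio>

int main() {
    const int perms[6][3] = { {0,1,2}, {0,2,1}, {1,0,2}, {1,2,0}, {2,0,1}, {2,1,0} };
    int rotations = 0;
    for (int p = 0; p < 6; ++p) {
        for (int signs = 0; signs < 8; ++signs) {
            // Build the signed permutation matrix: row r has +/-1 in column perms[p][r].
            float m[3][3] = { {0,0,0}, {0,0,0}, {0,0,0} };
            for (int r = 0; r < 3; ++r)
                m[r][perms[p][r]] = (signs & (1 << r)) ? -1.0f : 1.0f;
            float det = m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                      - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                      + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]);
            if (det > 0.0f) ++rotations;  // det +1: proper rotation, keep it
        }
    }
    std::printf("proper rotations: %d of 48\n", rotations);  // should print 24
    return 0;
}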

