Fix weight factors of the mass matrix. #3
castano wants to merge 1 commit into nvpro-samples:master
Conversation
If each of the equations

    Cj0 * bar.x + Cj1 * bar.y + Cj2 * bar.z = Si

in [Mij][Cj] = [Si] is weighted by w, then when we take the product

    [Mji][Mij][Cj] = [Mji][Si]

the w factors in [Mji][Mij] and [Mji][Si] should be squared, since they appear on both sides of the product. That is:

    [Wi][Mij][Cj] = [Wi][Si]  =>  [Mji][Wi]^2[Mij][Cj] = [Mji][Wi]^2[Si]
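The claim above can be checked numerically. A minimal sketch (random data, NumPy; none of this is from the patch): scale each row of an overdetermined system M c = s by a per-row weight, form the normal equations, and observe that the weight enters squared on both sides.

```python
# Sketch: row-weighting an overdetermined system M c = s by w_i and then
# forming the normal equations yields M^T W^2 M and M^T W^2 s, i.e. the
# weights appear SQUARED, as the comment argues.
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((6, 3))        # 6 equations, 3 unknowns
s = rng.standard_normal(6)             # right-hand side (sample values)
w = rng.uniform(0.5, 2.0, size=6)      # per-equation weights
W = np.diag(w)

# Weight the rows first, then multiply by the transpose of the weighted matrix:
WM, Ws = W @ M, W @ s
lhs = WM.T @ WM                        # equals M^T W^2 M
rhs = WM.T @ Ws                        # equals M^T W^2 s

assert np.allclose(lhs, M.T @ np.diag(w**2) @ M)
assert np.allclose(rhs, M.T @ (w**2 * s))
```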
I just found this implementation and, comparing it against mine, noticed the difference in the weighting factors. I'm not familiar with this code base; maybe the dA's are squared elsewhere?
Hi. I don't really understand the comment, but it's been a long time since I looked at this code. The lines you pointed out with dA in them are intended to build the matrix A in equations (4) and (7) of the reference paper, "Least Squares Vertex Baking" by Kavan et al., 2011. In section 3.1, the paper gives an analytic expression for the components of A in terms of the areas of the triangles incident to vertices or edges. I found, however, that sampling the terms gave a better result.

Is [Mij] in your comment the same as [Aij] from the paper? If so, then I don't see where in the paper [Aij] is being squared. Equation (7) sets up Ax = b, and the rest of the paper solves for x numerically. The dA terms in equation (4), which are "dp" in the integral, are not squared. Do you think there is a mistake in the paper, or just in the code? Please explain further using notation from the paper if possible.
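The sampling approach described here, accumulating integral terms per sample instead of using the paper's analytic expression, might look roughly like the sketch below. The function name and the `triplet_map` structure are assumptions based on the discussion, not the actual repository code.

```python
# Hypothetical sketch of sampling-based accumulation of the system matrix:
# for a sample with barycentric coordinates (b_a, b_b, b_c) on a triangle
# with vertex indices (a, b, c), add dA * b_i * b_j to entry (i, j).
# Whether this dA factor should appear once or squared is exactly the
# question this thread debates.
from collections import defaultdict

def accumulate_sample(triplet_map, verts, bary, dA):
    """verts: 3 vertex indices; bary: 3 barycentric coords; dA: per-sample area weight."""
    for i, bi in zip(verts, bary):
        for j, bj in zip(verts, bary):
            triplet_map[(i, j)] += dA * bi * bj

triplet_map = defaultdict(float)
accumulate_sample(triplet_map, (0, 1, 2), (0.5, 0.25, 0.25), dA=0.1)
```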
I do not use the analytic construction either. Let me try to clean up the notation. You have m vertices and n samples, which gives you n equations of the form

    c_a * bary_i.x + c_b * bary_i.y + c_c * bary_i.z = s_i

where s_i is the i-th sample color, with i in [0, n). Expressed in matrix form, with a rectangular n x m matrix A, the unknown vertex-color vector c, and the sample vector s:

    A c = s

This system is overdetermined; we want to solve it in the least-squares sense, with the error of each sample weighted proportionally to the area of the triangle divided by the number of samples:

    e = W(Ac - s)

where e is the error vector. Minimizing

    e^T e = (Ac - s)^T W^T W (Ac - s)

results in:

    A^T W^T W A c = A^T W^T W s

Therefore the weights used in the least-squares matrix you are building (triplet_map) need to be squared (W^T W). I hope this makes sense!
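The derivation above can be verified numerically. A minimal sketch with random data (NumPy; an illustration, not the repository's solver): the normal equations with the weights squared reproduce the ordinary least-squares solution of the row-weighted system, while using the weights only once does not.

```python
# Check: minimizing ||W(Ac - s)||^2 leads to A^T W^2 A c = A^T W^2 s,
# i.e. the per-sample weights must enter the normal equations squared.
import numpy as np

rng = np.random.default_rng(1)
n, m = 12, 4                           # n samples, m vertices
A = rng.standard_normal((n, m))        # barycentric-coefficient matrix
s = rng.standard_normal(n)             # sample colors
w = rng.uniform(0.1, 1.0, size=n)      # per-sample weights (e.g. area / sample count)

# Reference: ordinary least squares on the row-weighted system (W A) c = W s.
c_ref, *_ = np.linalg.lstsq(w[:, None] * A, w * s, rcond=None)

# Normal equations with the weights squared, as derived above.
W2 = np.diag(w**2)
c_sq = np.linalg.solve(A.T @ W2 @ A, A.T @ W2 @ s)
assert np.allclose(c_ref, c_sq)

# Using the weights only once gives a different (incorrect) solution here.
W1 = np.diag(w)
c_once = np.linalg.solve(A.T @ W1 @ A, A.T @ W1 @ s)
assert not np.allclose(c_ref, c_once)
```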
Please use notation directly from section 3 of the paper when responding. I think I followed your derivation up to the point where you introduce the "W" weights; I don't understand where W comes from. There are no extra weights in a typical least-squares solution, e.g., from Wikipedia. The error is not "proportional" to Ax - c; this is the error. Using your notation, but following the Wikipedia reference, I think it would go something like:

    E = Ax - c    // your error term above, without the W

Now square the error E, then take the derivative with respect to x and set it to 0 to find the minimum, assuming there is a single minimum:

    A^T A x = A^T c

and finally solve this sparse, complete system for x. This is exactly what the paper does, although they define variables cleverly so that the final equation to be solved is Ax = b, equation (7) from the paper. The A matrix from the paper is A' = (A^T)A in your notation. The entries of the paper's A matrix, as defined in equation (4), come out of the barycentric coordinates, the differential areas of the samples, and the substitution of variables, not any other added weights.
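The unweighted path sketched in this reply can also be checked numerically. A minimal sketch with random data (NumPy, keeping this comment's notation where x is the unknown and c the sample vector): the normal-equation solution matches a library least-squares solve.

```python
# Check of the unweighted least-squares route: E = Ax - c, minimize E^T E,
# set the derivative to zero, and solve the complete system A^T A x = A^T c.
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((10, 3))       # overdetermined: 10 equations, 3 unknowns
c = rng.standard_normal(10)

x_normal = np.linalg.solve(A.T @ A, A.T @ c)     # normal equations
x_lstsq, *_ = np.linalg.lstsq(A, c, rcond=None)  # library least squares
assert np.allclose(x_normal, x_lstsq)
```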