Remembering back to when I was in high school, I wish someone had had the wisdom to motivate me by telling me that matrices are crucial when doing 3D stuff on computers. It might have helped keep my motivation up when it came to studying math.
So I’m not going to focus on how to implement the matrix data structures. Like in the previous post, I will use the simd framework, which has all the building blocks that are needed. There is one twist between the book and simd. The book treats matrices as row-major, so when indexing a matrix in the book, the row comes first and the column comes second.
The simd matrices work the other way around: first comes the column and then the row. So to create a matrix like this:
the book uses notation like M21 to index the value at the third row and second column, which is 10 (row and column indexing starts at zero). In my code the same element is M12, where the column comes first and then the row. I’m not actually sure yet whether I can use these simd data structures like this just by taking this difference into account.
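To make the difference concrete, here is a small sketch of the column-first indexing, assuming simd's `m[column, row]` subscript and using the 4×4 matrix values the M21 = 10 example above implies:

```swift
import simd

// The matrix as the book prints it, row by row:
//   |  1    2    3    4   |
//   |  5.5  6.5  7.5  8.5 |
//   |  9   10   11   12   |
//   | 13.5 14.5 15.5 16.5 |
// simd wants columns, so each simd_float4 below is one COLUMN of the above.
let m = simd_float4x4(columns: (
    simd_float4(1, 5.5, 9, 13.5),    // column 0
    simd_float4(2, 6.5, 10, 14.5),   // column 1
    simd_float4(3, 7.5, 11, 15.5),   // column 2
    simd_float4(4, 8.5, 12, 16.5)    // column 3
))

// The book's M21 (row 2, column 1) becomes m[1, 2] in simd:
// column index first, row index second.
let value = m[1, 2]   // 10
```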
By the way, there are over 17 test cases in this chapter, so I’m not going to put them all here. They can easily be found in the source code. I will concentrate on the points that were challenging for me, or where I think I found an easy or maybe clean way to implement some part. So let’s dive in.
Creating a Matrix
As mentioned above, the simd matrices are column-major. This next test case is an example of creating and testing a 2×2 matrix, but the same principles apply to 3×3 and 4×4 matrices.
So basically, in the case above, what is needed is two tuples with two values in them. For matrices of sizes 2×2 and 3×3 we use the simd types float2 and float3 to create the columns. For 4×4 matrices we can use RGTuple as well; it’s an alias for float4.
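As a sketch of what this looks like with plain simd types (the 2×2 values here match the book's first 2×2 test; the exact test-case wrapper is in the source code):

```swift
import simd

// The book's 2×2 matrix, printed by rows:
//   | -3  5 |
//   |  1 -2 |
// Stored as two COLUMN vectors for simd:
let m = simd_float2x2(columns: (
    simd_float2(-3, 1),   // column 0: first value of each row
    simd_float2(5, -2)    // column 1: second value of each row
))

// Column-first indexing, m[column, row]:
// m[0, 0] == -3, m[1, 0] == 5, m[0, 1] == 1, m[1, 1] == -2
```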
Multiply Matrices
Multiplication is one of the key operations when dealing with 3D graphics on computers. By using multiplication we can “join” together different transformations. There are transformations like scaling, rotation, etc., and each of them can be represented by a matrix.
Basically, when we have some vector, we can multiply it with a matrix that represents, for example, scaling. After the multiplication we have the scaled vector as our result. Then we can multiply that with another matrix, maybe representing a rotation, and again we get a new vector that is the result of that rotation.
Doing things this way works, but there is an easier solution. We can multiply all those transformation matrices together and then multiply our vector with that one matrix to get the same effect as doing all the steps separately. This is more efficient and shows the usefulness of matrix multiplication. I think this is a really amazing thing about matrices. It’s like grouping multiple questions together and then answering them all at once.
For multiplication I will use the built-in functionality. Just create two matrices and use * to multiply, like with scalars. The code is very simple.
In this test case we also need to check that the result of the multiplication is the same as the matrix given in the book. For the testing I use the simd_equal() function. In this test case it works, but there will be a case where that function is NOT OK. We will come to that later.
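Here is a small 2×2 sketch of the idea (these are my own example values, not the book's 4×4 test matrices):

```swift
import simd

// A and B as the book would print them (by rows), stored as columns:
// A = | 1 2 |   B = | 5 6 |
//     | 3 4 |       | 7 8 |
let a = simd_float2x2(columns: (simd_float2(1, 3), simd_float2(2, 4)))
let b = simd_float2x2(columns: (simd_float2(5, 7), simd_float2(6, 8)))

// The * operator performs matrix multiplication (simd_mul under the hood).
let c = a * b

// c by rows is | 19 22 |, so its columns are (19, 43) and (22, 50).
//              | 43 50 |
let same = simd_equal(c, simd_float2x2(columns: (simd_float2(19, 43),
                                                 simd_float2(22, 50))))
```

Because the values here are small integers, the exact simd_equal() comparison is safe; the floating point caveat mentioned above only bites later with the inverse.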
The Identity Matrix
The identity matrix is like the number 1, but for matrices.
Jamis Buck
To generate an identity matrix I can use the built-in functionality from simd, or, to make it a bit more convenient, the extension I created for RGMatrix4x4. Here are both techniques; the latter internally uses the first one.
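A sketch of both techniques, assuming RGMatrix4x4 is the post's alias for simd_float4x4 (the `identity` extension below is my guess at what such a convenience could look like, not necessarily the post's exact code):

```swift
import simd

typealias RGMatrix4x4 = simd_float4x4   // assumed alias

// Technique 1: the built-in constant from simd.
let i1 = matrix_identity_float4x4

// Technique 2: a convenience extension that internally uses technique 1.
extension RGMatrix4x4 {
    static var identity: RGMatrix4x4 { matrix_identity_float4x4 }
}

let i2 = RGMatrix4x4.identity
// Multiplying any matrix by the identity leaves it unchanged.
```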
Transposing Matrices
For transposing a matrix I will also use the simd built-in functionality. The test case is easy to follow, so just check the code. I am not going to explain here what transposing is; you should read the book. The Wikipedia page actually has quite a good animation that visualises what happens when transposing.
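The built-in part is just simd's `transpose` property; a minimal sketch with my own example values:

```swift
import simd

// By rows: | 0 9 |   transposed: | 0 3 |
//          | 3 0 |               | 9 0 |
let m = simd_float2x2(columns: (simd_float2(0, 3), simd_float2(9, 0)))

// The built-in transpose property swaps rows and columns.
let t = m.transpose
// t[0, 0] == 0, t[1, 0] == 3, t[0, 1] == 9, t[1, 1] == 0
```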
Inverting Matrices
What inverting means, basically, is that when we multiply matrix A with B we get C, and to go back from C to A we multiply C with the inverse of B. That’s why inverse is the keyword here: B’s inverse matrix is what is needed.
Determinant
The first thing we need to get an inverse matrix is a determinant, so that must be tested. I will continue to use the built-in functionality here. The test case for calculating the determinant of a 2×2 matrix is like this:
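The built-in functionality is simd's `determinant` property. A sketch using the values from the book's 2×2 determinant test:

```swift
import simd

// By rows: |  1 5 |
//          | -3 2 |
let m = simd_float2x2(columns: (simd_float2(1, -3), simd_float2(5, 2)))

// determinant = 1 * 2 - 5 * (-3) = 17
let d = m.determinant
```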
Submatrix
The goal is to invert a matrix, and the next thing needed is a submatrix. This is what is left when a single column and a single row are deleted from the matrix. The size of the matrix also changes: when deleting a row and a column from a 4×4 matrix the result is a 3×3 matrix, and so on.
There’s no magic there, and not even really any math.
Jamis Buck
So this must be easy… well, not that easy. Not for me anyway. When I tried to think about how to construct the new matrix after deleting a row and a column, I had to take a break. Breaks. Well, two breaks… no, three… not sure if there were more of them. Lots of coffee was needed anyway. So the solution in the end is probably not the best, but it works.
For the submatrix functionality there is no direct method in simd, so I needed to create a new method, and I will implement it as an extension. Extensions are a really great feature in Swift: you can easily add functionality to existing data types without messing up the original type.
I wanted to implement this functionality so that the method is easy to use. From the book’s submatrix function interface, submatrix(matrix, row, column), you could easily assume that the row and column arguments are of integer type. But what if a call is made using, for example, a row value that is greater than the actual number of rows? Error checking is needed, or the implementation could return nil when the input is not valid, or something else.
I decided to create custom Swift enums to limit the scope of the input. In Swift, enums are quite capable of all sorts of things. Here are the enums for this purpose.
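A sketch of what such enums could look like (the names RGPosition3/RGPosition4 match the post's later references, but the case names are my assumption). The point is that an invalid row or column index simply cannot be expressed:

```swift
// Valid positions in a 3-dimensional matrix/vector.
enum RGPosition3 {
    case first, second, third
}

// Valid positions in a 4-dimensional matrix/vector.
enum RGPosition4 {
    case first, second, third, fourth
}
```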
In the test case I can then use these enums in the function call (the function implementation will be explained later). The test case for reducing a 3×3 matrix to a 2×2 one is as follows:
In my case the vectors are the columns, which means the values inside a vector are the rows. The same holds for all the data types, from float2 to float3 and to RGTuple, which is the same as float4. The first thing to implement is a method that returns a float2 from a float3 after removing the selected row. The implementation is here:
An equal approach is used for the other vector data types, and this is a place where I can use the RGPosition3 enum. I think this is quite a simple implementation.
The next thing is to implement the actual submatrix method. The same kind of technique can be used with the matrix. I’ll show you the code first so it’s easier to explain.
The same RGPosition3 can be used here because the dimensional limits are the same for rows and columns. The inputs for the method are the column and the row I want to get rid of.
A Swift switch block is used to first check which column is to be deleted. The resulting RGMatrix2x2 is constructed from the remaining two columns. Then the method created earlier for the vector types is used to remove the specific row. Exactly the same approach is used for RGMatrix4x4 too.
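Putting the pieces together, here is a self-contained sketch of the 3×3 → 2×2 case (enum and helper names are my assumptions; the values in the usage example match the book's submatrix test):

```swift
import simd

// Assumed position enum, repeated here so the sketch compiles on its own.
enum RGPosition3 { case first, second, third }

extension simd_float3 {
    // Drop the selected "row" (component), returning the remaining two.
    func removing(row: RGPosition3) -> simd_float2 {
        switch row {
        case .first:  return simd_float2(y, z)
        case .second: return simd_float2(x, z)
        case .third:  return simd_float2(x, y)
        }
    }
}

extension simd_float3x3 {
    // Drop one column, then drop the given row from the two kept columns.
    func submatrix(column: RGPosition3, row: RGPosition3) -> simd_float2x2 {
        let kept: (simd_float3, simd_float3)
        switch column {
        case .first:  kept = (columns.1, columns.2)
        case .second: kept = (columns.0, columns.2)
        case .third:  kept = (columns.0, columns.1)
        }
        return simd_float2x2(columns: (kept.0.removing(row: row),
                                       kept.1.removing(row: row)))
    }
}

// By rows: |  1 5  0 |  removing row 0 and column 2 leaves | -3 2 |
//          | -3 2  7 |                                     |  0 6 |
//          |  0 6 -3 |
let m = simd_float3x3(columns: (simd_float3(1, -3, 0),
                                simd_float3(5, 2, 6),
                                simd_float3(0, 7, -3)))
let s = m.submatrix(column: .third, row: .first)
```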
Minors
The minor of an element at row i and column j is the determinant of the submatrix at (i,j).
Jamis Buck
I already have the functionality to create a submatrix and to calculate the determinant, so this is the easy part: just put those two things together in a method of its own. There is nothing fancy here, and the code is easy to follow. Worth mentioning: the result of a minor is a scalar value.
Cofactor
In short, the cofactor is based on whether the sum of the row and column of a minor is odd or even. If it’s odd, the cofactor is the negated minor; if it’s even, the cofactor is the same as the minor.
You will find the test case in the code, but here is the implementation. It’s implemented using the good old extension. Internally it uses the minor method and then just uses a basic if-else block to either negate the minor or return it as is.
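A simplified, self-contained sketch of both minor and cofactor for a 3×3 matrix. I use plain Int indices here instead of the post's enums to keep the example short; the values in the usage example match the book's minor/cofactor tests:

```swift
import simd

// The minor at (row, col) is the determinant of the submatrix at (row, col).
func minor(_ m: simd_float3x3, row: Int, col: Int) -> Float {
    // Read the matrix out row by row (m[c, r] is column-first in simd)...
    var rows: [[Float]] = (0..<3).map { r in (0..<3).map { c in m[c, r] } }
    // ...delete the chosen row and column...
    rows.remove(at: row)
    let reduced = rows.map { r -> [Float] in
        var r = r; r.remove(at: col); return r
    }
    // ...and take the determinant of the remaining 2×2 (column-major again).
    return simd_float2x2(columns: (
        simd_float2(reduced[0][0], reduced[1][0]),
        simd_float2(reduced[0][1], reduced[1][1])
    )).determinant
}

// The cofactor negates the minor when row + col is odd.
func cofactor(_ m: simd_float3x3, row: Int, col: Int) -> Float {
    let minorValue = minor(m, row: row, col: col)
    return (row + col) % 2 == 0 ? minorValue : -minorValue
}

// By rows: | 3  5  0 |   minor(1, 0) == 25, cofactor(1, 0) == -25
//          | 2 -1 -7 |   minor(0, 0) == -12, cofactor(0, 0) == -12
//          | 6 -1  5 |
let m = simd_float3x3(columns: (simd_float3(3, 2, 6),
                                simd_float3(5, -1, -1),
                                simd_float3(0, -7, 5)))
```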
At this point the book says to test the determinant of the larger matrices. I found that even though there are no methods for submatrix or cofactor in simd, there is a method (actually a computed property) for the determinant of a float4x4 matrix. I bet the simd implementation is good to go with.
What the determinant is needed for is finding out whether the matrix has an inverse at all. If the determinant is 0 there is no inverse; if it is anything else, there is one.
The ultimate goal was to somehow calculate the inverse matrix. And guess what: for that purpose there is also a built-in property in simd. So I am not going to make this any harder at this point; I will count on that. And here is the test:
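The built-in property is `inverse`; a minimal sketch with my own example values (not the book's test matrix):

```swift
import simd

// By rows: | 4 7 |   determinant = 4*6 - 7*2 = 10, so the inverse exists.
//          | 2 6 |
let a = simd_float2x2(columns: (simd_float2(4, 2), simd_float2(7, 6)))

// simd's built-in inverse property.
let inv = a.inverse

// a * inv should be (approximately) the identity matrix.
let product = a * inv
```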
Aha! Even now, when I use simd’s built-in inverse functionality, I have to compare the two matrices. This is the thing I said earlier that I would “explain later”. Previously, when comparing matrices, the values had been exact, so the comparison worked well using the built-in simd_equal() function for matrices. But in the case above, the floating point values given in the book are not exactly equal to what inverse returns. That is why I had to implement the isEqualTo method for the matrices. This is how it looks:
It looks quite the same for the 3×3 matrix as well.
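A sketch of what such an epsilon comparison could look like for the 4×4 case (the method name matches the post, but the body and the epsilon value are my assumptions):

```swift
import simd

extension simd_float4x4 {
    // Element-wise comparison within a small tolerance, instead of the
    // exact simd_equal(), which fails on inexact floating point results.
    func isEqualTo(_ other: simd_float4x4, epsilon: Float = 0.00001) -> Bool {
        for col in 0..<4 {
            for row in 0..<4 where abs(self[col, row] - other[col, row]) >= epsilon {
                return false
            }
        }
        return true
    }
}
```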
The last thing is to test the original problem: A * B = C and C * inverse(B) = A. So here it is, the final piece.
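The round trip can be sketched like this (my own 2×2 example values; the book's test uses 4×4 matrices):

```swift
import simd

// By rows: A = | 3 -9 |   B = | 8  2 |   (det(B) = -8 - 6 = -14, invertible)
//              | 3 -8 |       | 3 -1 |
let a = simd_float2x2(columns: (simd_float2(3, 3), simd_float2(-9, -8)))
let b = simd_float2x2(columns: (simd_float2(8, 3), simd_float2(2, -1)))

// If A * B = C, then C * inverse(B) gives A back (up to floating point error).
let c = a * b
let restored = c * b.inverse
```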
Whoah, lots of stuff. Even without implementing all the functionality by myself, there were lots of things to get familiar with. For now I will omit the “Putting it together” section; I will add it here later. You should check the source code. There is one place in the code where I used the row-based init method to create a matrix to follow the book’s approach. I leave it to you to find it. Until next time.