Things are getting more and more interesting here. The red circle that appeared in the file as the result of the last chapter's Putting It Together part was cool. The result promised at the end of this chapter is cooler still 🙂 Now we are about to add light and shading to make the drawing represent a real three-dimensional object.
The truth is that most ray tracers favor approximations over physically accurate simulations, so to shade any point, you only need to know four vectors.
Jamis Buck
The quote from the book doesn’t sound that bad. Only four vectors to master shading! On page 76 (PDF version) there is a nice figure that demonstrates the idea. If I put myself in the place of the observer in that figure, the vectors are like this:
E = eye vector, pointing from the point P that I am looking at straight into my eye (ouch).
L = light vector, from the point P I am looking at towards the light source.
N = surface normal, a vector perpendicular to the surface at the point P that I am looking at.
R = reflection vector, the direction the light would bounce in when it comes from the light source.
To calculate E we just negate the ray’s direction vector; negating a vector flips it around. To calculate L we need to know the light’s position. Since we decide in our ray tracer where the light will be, we will know it 🙂 . All we need to do is subtract the point P from the position of our light source, and that is L. So far everything seems understandable.
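A minimal sketch of both calculations, using a tiny `RGTuple` stand-in for the project’s real tuple type (the real one in my repository is more complete, but the idea is the same):

```swift
// A minimal stand-in vector/point type just for these sketches;
// the real project uses its own tuple implementation.
struct RGTuple: Equatable {
    var x, y, z, w: Double

    static func - (a: RGTuple, b: RGTuple) -> RGTuple {
        RGTuple(x: a.x - b.x, y: a.y - b.y, z: a.z - b.z, w: a.w - b.w)
    }

    static prefix func - (a: RGTuple) -> RGTuple {
        RGTuple(x: -a.x, y: -a.y, z: -a.z, w: -a.w)
    }
}

// Example values: a ray pointing in -z, a point on a surface,
// and a light up and to the left.
let rayDirection = RGTuple(x: 0, y: 0, z: -1, w: 0)
let pointP       = RGTuple(x: 0, y: 0, z: -1, w: 1)
let lightPos     = RGTuple(x: -10, y: 10, z: -10, w: 1)

let e = -rayDirection      // E: flip the ray's direction towards the eye
let l = lightPos - pointP  // L: from P towards the light (normalise before use)
```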
Surface Normals
A surface normal (or just normal) is a vector that points perpendicular to a surface at a given point.
Jamis Buck
I’m not going to explain the normal any other way than how it is explained in the book. There are nice figures in the book that demonstrate the concept of a normal. In this case one picture easily tells more than a thousand words.
Computing the Normal on a Sphere
One thing about normals is that they are always represented as unit vectors, since the purpose of a normal is only to represent a direction. Keeping normals as unit vectors makes things cleaner.
The first tests consist of four cases where the normal on a sphere is tested, and a fifth test basically checks that the normal is a unit vector.
To implement the normalAt function, all that needs to be done at this point is to subtract the origin of the sphere (which is (0,0,0)) from the point we are interested in and normalise that vector. Still, everything seems quite understandable.
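Here is the implementation, or rather a sketch of it, assuming the `RGTuple` stand-in from above plus magnitude and normalisation helpers (the real code in my repository may name things differently):

```swift
import Foundation

extension RGTuple {
    // Euclidean length of the tuple.
    var magnitude: Double { sqrt(x * x + y * y + z * z + w * w) }

    // Scale the tuple down to unit length.
    func normalized() -> RGTuple {
        let m = magnitude
        return RGTuple(x: x / m, y: y / m, z: z / m, w: w / m)
    }
}

// Normal on a unit sphere centred at the origin: just the vector
// from the origin to the point, normalised.
func normalAt(point: RGTuple) -> RGTuple {
    let origin = RGTuple(x: 0, y: 0, z: 0, w: 1)
    return (point - origin).normalized()
}
```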
Transforming Normals
The previous cases were the easy ones because all the spheres were located at the origin. That will not be the case when constructing an actual scene. To calculate the normal for a point on a sphere that is located in actual world space, we need to do some extra work.
If the sphere is in world space, the point needs to be converted back to object space. In object space it’s easy to calculate the normal, because there the sphere’s centre is at the origin (0,0,0). The last step is to transform that calculated normal back to world space.
To transform the normal back to world space so that it keeps its perpendicularity, the object-space normal must be multiplied by the transpose of the inverse of the transformation matrix. (Whoah)
For me things are getting a little tricky here. I mean, it’s easy to follow the book and get the code to work, but actually understanding this last step is a bit hard. So I’m not going to get stuck on it; I count on catching up with it later.
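Anyway, the normalAt function implementation now looks roughly like this. It’s a sketch: I’m assuming a sphere type carrying a transform matrix with `inverse()` and `transposed()` methods and a matrix × tuple operator, all built in the matrix chapters; those exact names are my guesses here:

```swift
// Sketch only: RGSphere and its transform matrix are assumed to come
// from the previous chapters.
func normalAt(sphere: RGSphere, worldPoint: RGTuple) -> RGTuple {
    // 1. Move the point from world space into the sphere's object space.
    let objectPoint = sphere.transform.inverse() * worldPoint

    // 2. In object space the normal is trivial: point minus origin.
    let objectNormal = objectPoint - RGTuple(x: 0, y: 0, z: 0, w: 1)

    // 3. Transform the normal back to world space with the transpose of
    //    the inverse, which keeps it perpendicular to the surface.
    var worldNormal = sphere.transform.inverse().transposed() * objectNormal

    // The transpose step can leave a non-zero w; zero it and renormalise.
    worldNormal.w = 0
    return worldNormal.normalized()
}
```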
Reflecting Vectors
The normal that we calculated in the last step plays a central role now that we need to reflect a vector. Again, there are nice figures in the book that show what reflection means. If I try to express reflection in an alternative way I might say: “When a vector hits a surface at a certain point, it’s the point’s normal that defines what the reflection vector will be.”
So now we will create a function newRGReflect. It returns a new vector representing the reflection, based on the input vector and the normal vector. Here are the two test cases.
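Translated into Swift, the two scenarios from the book look roughly like this (the test names and the epsilon comparisons are my own choices):

```swift
import Foundation
import XCTest

class ReflectTests: XCTestCase {
    // Reflecting a vector approaching the surface at 45°.
    func testReflectingVectorApproachingAt45Degrees() {
        let v = RGTuple(x: 1, y: -1, z: 0, w: 0)
        let n = RGTuple(x: 0, y: 1, z: 0, w: 0)
        let r = newRGReflect(vector: v, normal: n)
        XCTAssertEqual(r.x, 1, accuracy: 1e-5)
        XCTAssertEqual(r.y, 1, accuracy: 1e-5)
    }

    // Reflecting a vector off a slanted surface.
    func testReflectingVectorOffSlantedSurface() {
        let v = RGTuple(x: 0, y: -1, z: 0, w: 0)
        let n = RGTuple(x: sqrt(2) / 2, y: sqrt(2) / 2, z: 0, w: 0)
        let r = newRGReflect(vector: v, normal: n)
        XCTAssertEqual(r.x, 1, accuracy: 1e-5)
        XCTAssertEqual(r.y, 0, accuracy: 1e-5)
    }
}
```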
After converting the pseudocode from the book to Swift, the reflection function looks like this.
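The formula is r = v − n · 2 · dot(v, n). A sketch, assuming dot-product and scalar-multiplication helpers on the tuple type:

```swift
extension RGTuple {
    // Scalar multiplication.
    static func * (a: RGTuple, s: Double) -> RGTuple {
        RGTuple(x: a.x * s, y: a.y * s, z: a.z * s, w: a.w * s)
    }

    // Dot product.
    func dot(_ other: RGTuple) -> Double {
        x * other.x + y * other.y + z * other.z + w * other.w
    }
}

// Reflect the incoming vector around the normal.
func newRGReflect(vector v: RGTuple, normal n: RGTuple) -> RGTuple {
    v - n * 2 * v.dot(n)
}
```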
The Phong Reflection Model
In the book Jamis uses the Phong reflection model. I am not very familiar with different shading techniques, but even I have heard of the Phong reflection model before 🙂
In this model there are three different types of lighting. First there is ambient reflection, which describes how the other objects in the environment reflect light; it is treated as a constant in the Phong model. I guess this makes things much easier, since in the real world there are all kinds of materials and surfaces that reflect light differently, and accounting for all of them individually would make everything much harder to put together.
Second, there is diffuse reflection, which depends only on the angle between the light source and the surface normal. It models the light reflected from matte surfaces.
The third is specular reflection, and it’s the reflection of the light source itself. It is controlled by a parameter called shininess.
The higher the shininess, the smaller and tighter the specular highlight.
Jamis Buck
The different attributes can be calculated separately, but to make use of them they must all be put together. There is a nice illustration in the book that demonstrates just that.
To reflect light we need a light source, so we create a point light. A point light is a simple struct holding just a position and an intensity.
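A sketch of the definition, test case included (the `RGPointLight` name is my guess; `RGColor` is the color type from the earlier chapters):

```swift
import XCTest

// A point light has no size: just a position and an intensity (a color).
struct RGPointLight {
    let position: RGTuple
    let intensity: RGColor
}

class PointLightTests: XCTestCase {
    // The struct simply stores what it is given.
    func testPointLightHasPositionAndIntensity() {
        let intensity = RGColor(red: 1, green: 1, blue: 1)
        let position = RGTuple(x: 0, y: 0, z: 0, w: 1)
        let light = RGPointLight(position: position, intensity: intensity)
        XCTAssertEqual(light.position, position)
    }
}
```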
To create something concrete we still need a data structure called material, which holds the color of the material and all the Phong reflection model attributes. The test case for it can be found in the source code.
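Here is a sketch of the implementation, using the default attribute values the book suggests (the `RGMaterial` name and the `RGColor` initialiser labels are my assumptions):

```swift
// The Phong attributes plus a color. The defaults are the values the
// book suggests for a freshly created material.
struct RGMaterial {
    var color = RGColor(red: 1, green: 1, blue: 1)
    var ambient = 0.1
    var diffuse = 0.9
    var specular = 0.9
    var shininess = 200.0
}
```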
A material on its own is not useful, so we need to attach it to our sphere. We could think of it like this: if the sphere knows what material it is made of, it knows how to reflect the light cast on it.
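In code this can be as simple as giving the sphere a material property with a default value (again a sketch; the `RGSphere` and `RGMatrix` names are my guesses, and the transform comes from the previous chapter):

```swift
// The sphere carries its transform plus the material it is made of.
struct RGSphere {
    var transform = RGMatrix.identity
    var material = RGMaterial()
}
```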
The very last thing is to create a lighting function. In my code it’s called newRGLighting, and it returns a new RGColor object. It takes as input the point on the sphere we are interested in, the location of our eye (the observation point), the material object that defines the material at the point we are observing, and the point light object, aka the light source. Based on the given data, the function calculates how the point at hand should be shaded or rendered in the given context.
There are quite many test cases covering different setups, so I’m not going to copy-paste all of them here. The function, based on the book’s pseudocode, is shown below.
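A sketch of how the book’s lighting pseudocode maps to Swift: newRGLighting and RGColor are the names from my code mentioned above, while the parameter labels and the color operators (Hadamard product, scalar multiply, addition, from the colors chapter) are assumptions:

```swift
import Foundation

func newRGLighting(material: RGMaterial,
                   light: RGPointLight,
                   point: RGTuple,
                   eyeVector: RGTuple,
                   normalVector: RGTuple) -> RGColor {
    let black = RGColor(red: 0, green: 0, blue: 0)

    // Combine the surface color with the light's intensity.
    let effectiveColor = material.color * light.intensity

    // Direction from the point towards the light source.
    let lightVector = (light.position - point).normalized()

    // The ambient contribution is a constant.
    let ambient = effectiveColor * material.ambient

    var diffuse = black
    var specular = black

    // Cosine of the angle between the light vector and the normal.
    // Negative means the light is on the other side of the surface.
    let lightDotNormal = lightVector.dot(normalVector)
    if lightDotNormal >= 0 {
        diffuse = effectiveColor * material.diffuse * lightDotNormal

        // Cosine of the angle between the reflection and the eye.
        // Negative or zero means the reflection points away from the eye.
        let reflectVector = newRGReflect(vector: -lightVector, normal: normalVector)
        let reflectDotEye = reflectVector.dot(eyeVector)
        if reflectDotEye > 0 {
            let factor = pow(reflectDotEye, material.shininess)
            specular = light.intensity * material.specular * factor
        }
    }

    return ambient + diffuse + specular
}
```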
I think this is very fascinating. You put some data together, do some math with it, and you get a color that reflects the color in a realistic environment. Now we have everything we need to render a shaded object.
Putting It Together
The Putting It Together part can be found in this chapter’s playground. I have commented the parts that are specific to this chapter, so it should be easy to follow. Compared to the red circle we got at the end of the previous chapter (and that was nice too), this one looks way better. The end result is below, and I must say it was really rewarding to get the result right 🙂