Tuesday, March 27, 2012

Graphics: Basic Shaders & Final Project Issue

I haven’t yet made a blog post covering my coding in graphics, so now is the perfect time to note my progress. Over the course of this semester we have been learning about shaders, and using the Cg library to incorporate them. In previous semesters we used a combination of C++ and OpenGL code, and now we needed to add shaders to that skill set.

I’ll be honest: I haven’t fully wrapped my head around how every shader works, but I have a good idea of how a lot of them do. They can produce all sorts of effects that OpenGL alone never could. For example, OpenGL does have basic lighting, which we implemented in our game first semester, but it doesn’t look very good, nor does it include shadows by default.

Overview of Shading

With shaders, you can implement something closer to real lighting, incorporating ambient light (the general light filling the scene), emissive light (light given off by an object itself), diffuse light (light from the light source scattered across the surfaces of objects in the scene) and specular light (essentially the shininess, or highlight, on an object). Incorporating this with shaders comes down to a set of formulas that are passed along and used to calculate the color of everything in the scene. It’s pretty much just formulas doing all the effects, though of course the formulas vary in complexity and some effects need extra work.
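To give a rough idea of what those formulas look like, here is a small CPU-side sketch of that kind of lighting calculation. The names and the exact terms are just for illustration, not the actual code from our shaders:

#include <cmath>
#include <algorithm>

struct Vec3 { float x, y, z; };

// Tiny helpers just for this sketch
static float dot(Vec3 a, Vec3 b)    { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3  scale(Vec3 v, float s) { Vec3 r = { v.x*s, v.y*s, v.z*s }; return r; }
static Vec3  add(Vec3 a, Vec3 b)    { Vec3 r = { a.x+b.x, a.y+b.y, a.z+b.z }; return r; }

// N = surface normal, L = direction to the light, H = half vector between the
// light direction and the view direction. All assumed normalized already.
static Vec3 shade(Vec3 emissive, Vec3 ambient, Vec3 lightColor,
                  Vec3 N, Vec3 L, Vec3 H, float shininess)
{
    float diffuseAmount  = std::max(dot(N, L), 0.0f);                      // how directly lit
    float specularAmount = std::pow(std::max(dot(N, H), 0.0f), shininess); // the highlight
    Vec3 diffuse  = scale(lightColor, diffuseAmount);
    Vec3 specular = scale(lightColor, specularAmount);
    // final color = emissive + ambient + diffuse + specular
    return add(add(emissive, ambient), add(diffuse, specular));
}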


Lighting with shaders

One thing to note is that shaders run on the GPU, which is built for these kinds of calculations, instead of piling all of our program’s work onto the CPU alone. That leaves the CPU freer for the rest of the game and takes advantage of the GPU’s parallel number crunching, which results in a faster overall game.

First off, let’s see a basic overview of how shaders work. A shader normally has two stages, a vertex program and a fragment program. The vertex program runs its calculations on the points (vertices) of the objects in the scene, which makes it better for certain calculations, such as lighting on a high-polygon model. The fragment program, on the other hand, runs its calculations on every pixel on the screen, so it can be used effectively for an effect such as blur (which basically takes every pixel on the screen and averages it out with its neighbouring pixels).


Blur effect
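Just to make the “average out with the neighbours” idea concrete, here is a rough CPU-side version of a simple box blur. The real effect runs per pixel in a fragment shader; this is only meant to show the averaging:

#include <vector>

// Box blur over a grayscale image stored row by row.
// Each output pixel is the average of itself and its neighbours.
std::vector<float> boxBlur(const std::vector<float>& src, int width, int height)
{
    std::vector<float> dst(src.size(), 0.0f);
    for (int y = 0; y < height; ++y)
    {
        for (int x = 0; x < width; ++x)
        {
            float sum = 0.0f;
            int count = 0;
            for (int dy = -1; dy <= 1; ++dy)
            {
                for (int dx = -1; dx <= 1; ++dx)
                {
                    int nx = x + dx, ny = y + dy;
                    if (nx < 0 || nx >= width || ny < 0 || ny >= height)
                        continue;                     // skip pixels off the edge
                    sum += src[ny * width + nx];
                    ++count;
                }
            }
            dst[y * width + x] = sum / count;         // average of the neighbourhood
        }
    }
    return dst;
}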

It’s possible to try the same effect in either the vertex or the fragment stage, but the results will vary. Lighting done in the vertex stage runs all of its calculations per vertex of the objects being lit, while the fragment stage does the lighting per pixel afterwards. So if we have a low-poly model, its massive triangles will be lit in a way that can look less smooth than fragment lighting, which doesn’t care that the model is low poly at all. On the other hand, if the model is high poly, the vertex shader will show more depth, while the fragment result will look flatter because it doesn’t take into account the huge number of triangles forming the shape of that model.


Vertex lighting on the left, fragment on the right

Now when we program, we need to declare vertex and fragment programs and link them, through a number of functions called from our main C++ code, to the .cg program files that hold all the calculations our shaders will do. Any shader file whose calculations you want to use has to be linked this way from the main code. After that, it’s just a matter of the formulas and however many files you need to pass through to make your shaders work.
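Roughly, the setup on the C++ side looks something like the sketch below using the Cg runtime. The file name, entry points and parameter name here are made up for the example, not our actual files:

#include <Cg/cg.h>
#include <Cg/cgGL.h>

CGcontext   cgContext;
CGprofile   vertexProfile, fragmentProfile;
CGprogram   vertexProgram, fragmentProgram;
CGparameter modelViewProjParam;

void SetupShaders()
{
    cgContext = cgCreateContext();

    // Pick the best profiles the GPU supports for each stage
    vertexProfile   = cgGLGetLatestProfile(CG_GL_VERTEX);
    fragmentProfile = cgGLGetLatestProfile(CG_GL_FRAGMENT);

    // Load and compile the .cg file that holds the shader calculations
    vertexProgram = cgCreateProgramFromFile(cgContext, CG_SOURCE,
                        "lighting.cg", vertexProfile, "mainVertex", NULL);
    fragmentProgram = cgCreateProgramFromFile(cgContext, CG_SOURCE,
                        "lighting.cg", fragmentProfile, "mainFragment", NULL);
    cgGLLoadProgram(vertexProgram);
    cgGLLoadProgram(fragmentProgram);

    // Grab a handle to a parameter declared inside the shader
    modelViewProjParam = cgGetNamedParameter(vertexProgram, "modelViewProj");
}

void BindShaders()
{
    cgGLBindProgram(vertexProgram);
    cgGLEnableProfile(vertexProfile);
    cgGLBindProgram(fragmentProgram);
    cgGLEnableProfile(fragmentProfile);
}

BindShaders() then gets called before drawing anything that should go through the shaders.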


Bloom effect

Some effects use a single pass and they’re done, such as tinting the entire scene blue by changing every pixel’s color, while others, such as bloom, are more complex and use multiple passes through multiple shader files (a blur, then a bright pass to pick out the bright values, another pass to combine those two, and finally a pass to add the result onto the current image). With several shader effects stacked together you can end up with a lot of passes and files, so it can get quite complex.
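In very rough outline, bloom ends up looking something like this. The Texture type and the helper functions here (RenderSceneToTexture, ApplyPass, DrawFullScreenQuad) are placeholders standing in for the render-to-texture plumbing, not real functions from our engine:

struct Texture { int id; };   // stand-in for a render target handle

// Placeholder prototypes so the flow reads clearly
Texture RenderSceneToTexture();
Texture ApplyPass(const char* shaderFile, Texture input);
Texture ApplyPass(const char* shaderFile, Texture inputA, Texture inputB);
void    DrawFullScreenQuad(Texture image);

void DrawBloom()
{
    Texture scene   = RenderSceneToTexture();
    Texture blurred = ApplyPass("blur.cg",       scene);           // soften the image
    Texture bright  = ApplyPass("brightpass.cg", scene);           // keep only the bright values
    Texture glow    = ApplyPass("combine.cg",    blurred, bright); // merge those two results
    Texture result  = ApplyPass("add.cg",        scene, glow);     // add the glow onto the scene
    DrawFullScreenQuad(result);                                    // put the result on screen
}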

Anyway, I just wanted to give a quick overview of shaders for anyone who doesn’t know anything about them. What I really wanted to talk about is an experience coding our Game Development Workshop (final project) game that caused an issue, one that has now been fully resolved.

Basics of Viewing

Before I get to our issue, let’s talk about how we end up viewing an image on screen. First we have our camera position: where we are standing and what we are looking towards. From there we have “modeling”, which represents placing the objects in the area we are looking at, like putting a teapot into the scene.

From there we have “projection”, which defines the viewing volume (how the objects’ sizes and spatial relations appear to you) and also clips away anything not currently in view. Projection is also where you choose between perspective (showing depth, so things farther away look smaller) and orthographic (no depth into the screen is taken into account, so if object A is farther away than object B they will still appear the same size).

Then finally we have our viewport, which defines the window we draw into, its width and height. The reason I’m mentioning all of this is so you can understand just what’s happening in all this code; it will be important for the story.
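In OpenGL terms, that whole setup looks roughly like this (the numbers are just placeholder values):

#include <GL/gl.h>
#include <GL/glu.h>

void SetupView(int windowWidth, int windowHeight)
{
    // Viewport: the region of the window we draw into
    glViewport(0, 0, windowWidth, windowHeight);

    // Projection: perspective (with depth) or orthographic (no depth scaling)
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(60.0, (double)windowWidth / windowHeight, 0.1, 1000.0);
    // glOrtho(-10, 10, -10, 10, 0.1, 1000.0);   // the orthographic alternative

    // Camera + modeling: where we stand and look, then where objects get placed
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    gluLookAt(0.0, 5.0, 20.0,    // eye position
              0.0, 0.0, 0.0,     // point we look at
              0.0, 1.0, 0.0);    // up direction
    // ...then place each object with glTranslatef / glRotatef before drawing it
}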


Here is a breakdown of the entire process, in order

One more definition I want to add is the “modelview matrix”. I said modeling was how objects are placed in the scene; the modelview matrix is important because it is the matrix that defines how the objects are placed there. Depending on the matrix, it can rotate an object and translate its position to different parts of the scene. This modelview matrix is the key to the problem we ran into.
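Just to show what I mean, you can actually watch glTranslatef and glRotatef change that matrix by reading it back from OpenGL:

#include <GL/gl.h>

void ShowModelViewChanges()
{
    float modelViewMatrix[16];            // OpenGL keeps it as 16 floats, column major

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();                     // start from "no movement at all"
    glTranslatef(5.0f, 0.0f, 0.0f);       // slide 5 units along x
    glRotatef(45.0f, 0.0f, 1.0f, 0.0f);   // spin 45 degrees around the y axis

    // The modelview matrix now holds that translation and rotation combined
    glGetFloatv(GL_MODELVIEW_MATRIX, modelViewMatrix);
}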

Here is the story of what happened…

We had been working on shaders throughout the semester, learning the various effects that can be done with them, such as blurring, lighting, shadows, etc. We decided it was time to implement shaders in our game. We went in with a plan, because we had already discovered there was an issue involving shaders and OpenGL (the API we are working with): the shader needs to be updated with the positions of all our objects in the world, but the OpenGL translate and rotate functions we were using would not update it automatically.

The main problem is that our game uses glTranslate (OpenGL’s translate function) and glRotate (OpenGL’s rotate function) for all our objects in order to place them properly in the world. Our entire engine is built around that, so we needed to find a way around the issue and adapt the engine to work with shaders. We tried the first method that came to mind, which was to convert everything into modelview matrix form and use matrix multiplication ourselves to change the positions of the objects in the modelview matrix. For the most part the code we changed worked: the positions of objects updated the same way they used to with glTranslate and glRotate.
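The idea of that first attempt was roughly the following, simplified a lot; the real change touched far more of the engine:

#include <GL/gl.h>

// Build a 4x4 translation matrix by hand (column major, the way OpenGL expects)
void MakeTranslation(float x, float y, float z, float out[16])
{
    for (int i = 0; i < 16; ++i)
        out[i] = 0.0f;
    out[0] = out[5] = out[10] = out[15] = 1.0f;   // identity diagonal
    out[12] = x;  out[13] = y;  out[14] = z;      // translation in the last column
}

void PlaceObject(float x, float y, float z)
{
    float translation[16];
    MakeTranslation(x, y, z, translation);

    glMatrixMode(GL_MODELVIEW);
    glMultMatrixf(translation);   // multiply it onto the current modelview matrix
}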

However, once we put the shader code in, exactly what we didn’t want to happen occurred. All the objects loaded, but their positions and rotations still weren’t updating correctly. The modelview matrix was still being saved, but for some reason it wasn’t being passed to the shader correctly. We even tested to make sure the positions of characters and objects weren’t out of whack, and they were fine. We could move around the world and get attacked by the wandering monsters just like normal, except everything was being drawn on the exact same spot, even though in reality the objects were wandering all around the world.

Here you can see a teapot and a whole bunch of other objects stuck together

We spent much of the day trying to get past this, and it took a lot of time, but we figured out what the problem was: we now know exactly where we need to tell the shader to update the modelview matrix. Our first attempt was to update it right after our game updates the level, which didn’t work. The solution we discovered was to update the shader with the modelview matrix right after we translate and rotate an object. This means we should still be able to use the glTranslate and glRotate functions, because those functions are supposed to change our modelview matrix anyway. So basically, what it would look like is something like this (with placeholder names for the shader update and draw calls):

void PlayerEntity::DrawObject()
{
     glRotatef(angle, x, y, z);     // rotate the entity into place
     glTranslatef(x, y, z);         // move it to its position in the world
     UpdateShaderModelView();       // hand the freshly changed modelview matrix to the shader
     DrawModel();                   // ...then actually draw the model
}

So we just need to make sure Cg can access the spot where entities are rotated, translated and then drawn, and update the shader at that point. This should work because glTranslate and glRotate change the modelview matrix, so updating the shader right after them applies the newly changed matrix. We even tested it in an external program that uses Cg, updating both before and after the translate and rotate: putting the update before them left the object sitting in its default starting position, while putting it after actually applied the change. We have since tested it in our game and now have shaders fully working. Our hypothesis was correct; we simply had to make sure we passed the modelview matrix at the right time, and then all the positions in the world updated correctly, with the right lighting.
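With the Cg runtime, that update is basically a single call. Assuming the shader parameter was grabbed with cgGetNamedParameter when the program was loaded (the same made-up modelViewProj name as in the setup sketch earlier), it looks like:

#include <Cg/cg.h>
#include <Cg/cgGL.h>

// modelViewProjParam was fetched earlier with cgGetNamedParameter()
extern CGparameter modelViewProjParam;

void UpdateShaderModelView()
{
    // Copy OpenGL's current modelview-projection matrix (the one glTranslatef
    // and glRotatef just changed) straight into the Cg shader parameter.
    cgGLSetStateMatrixParameter(modelViewProjParam,
                                CG_GL_MODELVIEW_PROJECTION_MATRIX,
                                CG_GL_MATRIX_IDENTITY);
}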


How things should look (with shaders off). Everything is positioned correctly

