Showing posts with label infr2350. Show all posts

Tuesday, March 27, 2012

Graphics : Basic Shaders & Final Project issue

I haven’t yet written a blog post covering my coding in graphics, so now is the perfect time to note my progress. Basically, over the course of this semester we have been learning about shaders, using NVIDIA’s Cg library to incorporate them. In previous semesters we used a combination of C++ and OpenGL code, and now we needed to add expertise in the use of shaders on top of that.

I’ll be honest: I haven’t completely wrapped my head around how every shader works, but I have a good idea how a lot of them do. Basically, they can do all sorts of effects that OpenGL would never have been able to do alone. For example, OpenGL does have basic lighting, which we implemented in our game first semester, but it’s not very good looking, nor does it include shadows by default.

Overview of Shading

With shaders, you can implement something closer to real lighting, incorporating ambient light (the constant background light in the scene), emissive light (light an object gives off on its own toward the camera), diffuse light (light from the light source scattering off the objects in the scene) and specular light (essentially the shininess of an object). Implementing this with shaders comes down to a set of formulas, passed along and used to calculate the color at each point in the scene. It’s pretty much just formulas doing all the effects, though of course the complexity of the formulas differs and some effects have more specialized requirements.
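To make that "it's just formulas" idea concrete, here's a minimal CPU-side sketch of those four terms added together for a single surface point, in plain C++ rather than actual shader code. The names and the single-channel color are my own simplification of the standard lighting model, not our actual shader:

```cpp
#include <algorithm>
#include <cmath>

// Single-channel sketch of the four lighting terms (a real shader does
// this per R, G, B and per light). All vectors are assumed normalized.
struct Vec3 { float x, y, z; };

float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// N = surface normal, L = direction to the light, H = half vector between
// the light and view directions (used for the specular highlight).
float lighting(float ambient, float emissive, float lightColor,
               Vec3 N, Vec3 L, Vec3 H, float shininess)
{
    float diffuse  = lightColor * std::max(dot(N, L), 0.0f);
    float specular = lightColor * std::pow(std::max(dot(N, H), 0.0f), shininess);
    if (dot(N, L) <= 0.0f) specular = 0.0f; // no highlight when the light is behind
    return emissive + ambient + diffuse + specular;
}
```

With the light straight on, all four terms contribute; with the light behind the surface, only the ambient and emissive terms survive, which is exactly why an unlit side of an object doesn't go fully black.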


Lighting with Shaders

One thing to note is that shaders run on the GPU, which is built for these highly parallel calculations, instead of piling all of our program’s work onto the CPU alone. This frees up the CPU for everything else and results in a faster overall game.

First off, let’s see a basic overview of how shaders work. A shader normally has two stages: a vertex program and a fragment program. The vertex program runs its calculations per vertex of the objects in the scene, which makes it better suited for certain calculations, such as lighting on a high-polygon model. The fragment program, on the other hand, runs its calculations on every pixel on the screen, so it can be used effectively for an effect such as blur (which basically averages every pixel on the screen with its neighbouring pixels).
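As a rough illustration of that per-pixel, fragment-style idea, here's a tiny CPU-side box blur on a one-dimensional row of pixels. This is my own sketch, not our actual shader; a real blur runs in 2D on the GPU, but the averaging is the same:

```cpp
#include <cstddef>
#include <vector>

// Fragment-style box blur on a 1D row of pixels: every pixel becomes the
// average of itself and its two neighbours (edges clamp). A fragment
// program does this same kind of per-pixel work, just in 2D on the GPU.
std::vector<float> boxBlur(const std::vector<float>& px)
{
    std::vector<float> out(px.size());
    for (std::size_t i = 0; i < px.size(); ++i) {
        float left  = px[i == 0 ? 0 : i - 1];            // clamp at left edge
        float right = px[i + 1 < px.size() ? i + 1 : i]; // clamp at right edge
        out[i] = (left + px[i] + right) / 3.0f;
    }
    return out;
}
```

Running it on a single bright pixel surrounded by black spreads that brightness into the neighbours, which is exactly the softening you see in a blurred frame.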


Blur effect

It’s possible to attempt the same effect in either the vertex or the fragment stage, but the results will vary. Lighting in the vertex stage calculates the lighting at every vertex of the objects being lit, while the fragment stage does the lighting on the pixels afterwards. So, for example, if we have a low-poly model, its massive triangles will be lit in a way that can look less smooth than fragment lighting, which doesn’t take into account that it’s a low-poly model at all. On the other hand, if the model is high poly, vertex lighting has far more vertices to work with and shows much more depth, closing the gap with fragment lighting, which doesn’t depend on the number of triangles forming the shape of the model.


Vertex lighting on the left, fragment on the right

Now, when we program, we need to declare vertex and fragment programs and link them, through a number of functions called from our main code, to the .cg program files that hold all the calculations our shaders will do. You have to link, from your main code, every shader file whose calculations you want to use. After that, it’s just a matter of the formulas and the number of files passed through to make your shaders work.


Bloom effect

Some effects use a single pass and are done, such as tinting the entire scene blue by changing every pixel’s color, while others such as bloom are more complex, using multiple passes through multiple shader files (a blur, then a bright pass to keep only the bright values, another pass to combine those two, then finally adding the result onto the current image). With multiple shader effects stacked you might have a lot of passes and files, so it can get quite complex.
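Here's a sketch of that pass-after-pass structure on a 1D image, with the blur pass left out to keep it short. The function names are mine, not from any real bloom implementation:

```cpp
#include <cstddef>
#include <vector>

// Two of bloom's passes on a 1D image: the bright pass keeps only pixels
// at or above a threshold, and the combine pass adds the result back onto
// the original. (A real bloom blurs the bright image in between; this
// just shows the multi-pass structure.)
std::vector<float> brightPass(const std::vector<float>& px, float threshold)
{
    std::vector<float> out(px.size(), 0.0f);
    for (std::size_t i = 0; i < px.size(); ++i)
        if (px[i] >= threshold) out[i] = px[i]; // keep bright, zero the rest
    return out;
}

std::vector<float> combine(const std::vector<float>& a, const std::vector<float>& b)
{
    std::vector<float> out(a.size());
    for (std::size_t i = 0; i < a.size(); ++i)
        out[i] = a[i] + b[i]; // additive blend of the two passes
    return out;
}
```

The output of one pass becomes the input of the next, which is why the number of passes and files adds up so quickly once you stack effects.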

Anyways, I just wanted to give a quick overview of shaders for those who don’t know anything about them. What I really wanted to talk about is an issue we hit while coding our Game Development Workshop (final project) game, which has now been fully resolved.

Basics of Viewing

Before I get to our issue, let’s talk about how we view an image on screen. First we have our camera position: where we are positioned and what we are looking towards. From there we have “modeling,” which represents placing objects in the area we are looking at, such as placing a teapot in the scene.

From there we have “projection,” which defines the perspective (the viewing volume and the spatial relation between you and the objects) and also clips anything not currently on screen out of view. Projection also includes choosing between a perspective view (which shows depth) and an orthographic one (meaning that if object A is farther away than object B, they will still appear the same size, because depth into the screen is not taken into account).
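A stripped-down sketch of that difference, assuming the camera sits at the origin and looking only at the x coordinate of a point at depth z (real projection matrices handle y, clipping and much more):

```cpp
// Perspective divides by depth, so farther objects shrink on screen;
// orthographic drops the depth entirely, so size never changes with
// distance. Only the x coordinate is shown, for illustration.
float perspectiveX(float x, float z)      { return x / z; }
float orthographicX(float x, float /*z*/) { return x; }
```

So under perspective, pushing a point twice as far away halves where it lands horizontally, while under orthographic it doesn't move at all.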

Then finally we have our viewport, which defines the window we draw into: its width and height. The reason I am mentioning all of this is so you can understand just what’s happening in the code. It will be important for the story.


Here is a break down of the entire process in order

One more definition I want to add is the “modelview matrix.” I said modeling was how objects are placed in the scene; the “matrix” part is important because it is what actually defines how the objects are placed. Depending on the matrix, it can rotate an object and translate its position to different areas of the scene. This modelview matrix is the key problem we encountered, so let’s start the story now.
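For illustration, here's roughly what a translation does to the modelview matrix under the hood, as a small CPU-side sketch using OpenGL's column-major layout (the helper names are mine, not OpenGL's):

```cpp
#include <array>

// A 4x4 matrix stored column-major, the way OpenGL stores the modelview
// matrix. glTranslatef effectively multiplies the current modelview
// matrix by a matrix shaped like this one.
using Mat4 = std::array<float, 16>;
using Vec4 = std::array<float, 4>;

Mat4 translation(float tx, float ty, float tz)
{
    return Mat4{ 1, 0, 0, 0,      // column 0
                 0, 1, 0, 0,      // column 1
                 0, 0, 1, 0,      // column 2
                 tx, ty, tz, 1 }; // column 3: the translation
}

Vec4 transform(const Mat4& m, const Vec4& v)
{
    Vec4 out{};
    for (int row = 0; row < 4; ++row)
        for (int col = 0; col < 4; ++col)
            out[row] += m[col * 4 + row] * v[col]; // column-major indexing
    return out;
}
```

Applying a translation matrix to a point at the origin moves it to the translated position, which is all "placing an object in the scene" really means mathematically.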

Here is the story of what happened…

We’ve been working on shaders throughout the semester, learning the various effects that can be done with them, such as blurring, lighting, shadows, etc. We decided we would now try to implement shaders in our game. We had a plan, because we had already discovered an issue involving shaders and OpenGL (the API we are working with): the shaders need to be updated with the positions of all our objects in the world, but OpenGL’s translate and rotate functions would not update them automatically.

The main problem is that our game uses glTranslate (OpenGL’s translate function) and glRotate (OpenGL’s rotate function) to place all our objects properly in the world. Our entire engine is based on that, so we needed to get around this and adapt the engine to work with shaders. We tried the first method that came to mind: convert everything into modelview matrix form and use matrix multiplication to change the objects’ positions in the modelview matrix. For the most part the code we changed worked; the positions of objects updated the same way they used to with glTranslate and glRotate.

However, when we implemented the shader code, exactly what we didn’t want to happen occurred. All the objects loaded, but their positions and rotations still weren’t updating correctly. The modelview matrix was still being saved, but for some reason it wasn’t being passed to the shader correctly. We even tested to make sure the positions of characters and objects weren’t out of whack, and they were fine. We were able to move around the world and get attacked by the wandering monsters just like normal, except you could see that all those objects were being drawn on the exact same spot, even though in reality they were wandering all around the world.

Here you can see a teapot and a whole bunch of other objects stuck together

We spent much of the day trying to get past this, and it took a lot of time, but we figured out what the problem was: where we call the shader update for the modelview matrix. We had first tried updating it right after our game updates the level, which didn’t work. The solution was to update the shader with the modelview matrix right after we translate and rotate an object. This discovery means we can keep using the glTranslate and glRotate functions, because these functions change the modelview matrix anyway. So basically it looks like this…

PlayerEntity::DrawObject()
{
     glRotatef(angle, x, y, z);     // changes the current modelview matrix
     glTranslatef(x, y, z);         // changes it again
     UPDATE SHADER(modelviewMatrix) // pass the now-changed matrix to the shader
     Draw object
}

So we just need to make sure Cg can access the spot where entities are rotated, translated, then drawn, and update the shader at that point. This should work because glTranslate and glRotate change the modelview matrix, so updating the shader right after them applies the newly changed matrix. We even tested it in an external program by updating before and after the translate and rotate: putting the update before them left the object in its default starting position, while putting it after actually applied the change in a program using Cg. We have now tested it and have shaders fully in our game. Our hypothesis was correct; we simply had to make sure we passed the modelview matrix at the right time, and then all the positions in the world updated correctly, with the right lighting.


How things should look (with shaders off). Everything is positioned correctly


Wednesday, March 21, 2012

Computer Graphics: Graphics in Games Week 11 - Sprites vs. Models

In the very first games, hardware wasn’t very advanced. We didn’t have the capability to show 3D models, but we certainly could show 2D. Since we couldn’t do high-tech 3D model rendering, we used hand-drawn sprites in many games until we could. The sprite look is a classic, retro look that fell out of common use, but recently there has been a significant increase in the number of pixel-art games, many of them indie games but others from large companies as well. For a while, sprite-based games looked like they would mostly become a thing of the past, but their popularity has grown again in recent years. We’ll take a look at how sprites have been used in games and compare their advantages and disadvantages vs. 3D models.

Evolution of technology

Pong had a color palette limited to black and white

Like I’ve said, sprites have been used for a long time, since the very beginning. Due to computation limits, it was easier to render a 2D sprite than to calculate a model’s vertices and triangles, and it still is. But sprites had limitations of their own. Computers and entertainment systems could only show so many colors at a time, the first ones using only shades of grey. Eventually we evolved to 8-bit color, 16-bit, and so on and so forth. Not only that, but screen resolutions also had to grow from something much smaller. This meant the sprites we designed were limited in color and also in size. If we wanted the right proportions between a character sprite and a large building sprite, we had to make really tiny sprites and try to squeeze as much detail into them as we could.

Super Mario World had multi-layered backgrounds

Technology evolved, the limitations of sprites began to fall away, and we were able to create vibrant worlds and even immersive effects such as layering sprites. One such effect is to layer multiple sprites in the background, the closer layers moving faster as we move while the farthest move slower or not at all (parallax scrolling). This really makes the world look a lot more immersive. A lot of other neat techniques came with the use of sprites, but sprites began to take a back seat once technology reached the point where it could display 3D models.
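The math behind that layering effect is tiny; here's a sketch (the names and factor values are mine, just for illustration):

```cpp
// Parallax layering: each background layer scrolls at a fraction of the
// camera's movement. Factor 1.0 moves with the foreground; factor 0.0
// stays put, like a distant sky.
float layerOffset(float cameraX, float parallaxFactor)
{
    return cameraX * parallaxFactor;
}
```

Drawing each background layer at its own offset is what makes near hills sweep past while far mountains barely move.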

Mario 64 still had sprites despite being a 3D game

Models got popular and many franchises moved to these types of games. There was just something very appealing about the added depth, which sprites simply couldn’t match at the time. That’s not to say sprites didn’t still play a role in these games, though. If you take a look at Super Mario 64, there are actually still sprites in the levels. More specifically, they were used in particle systems, such as the snow.

Doom was a 3D game with 2D sprites

Also of note, some games used a 3D engine but still rendered everything with sprites. The most notable example of this is Doom: a completely 3D environment, yet everything in the game consisted of 2D sprites. But models were still becoming popular.

Mario Kart 64 used sprites for some of the items, like the green shell, while having 3D models elsewhere

In any case, technology continued to advance, and it became easier to calculate 3D models and the advanced physics needed to render them properly. Sprites were not advancing nearly as fast; sure, we had more computation available, and maybe the process of creating sprites got more efficient, but it wasn’t moving as fast as 3D models. Sprites had physics, such as gravity and other forces, but none that could affect their limbs the way 3D models could. Sprites were, and still are, used in 3D games because of their cheap calculations, but more and more advanced games use fewer sprites or even none. For a while it seemed like sprite-based games were becoming obsolete. Keep in mind, I am talking about mainstream games, made by indie developers or large companies, the big sellers, not the many free online games on various sites.

Scott Pilgrim vs. the World and other sprite based games began to pop up in the recent years

Then suddenly a huge surge of games came out, many of them following a “retro throwback” marketing strategy. Take Scott Pilgrim vs. the World: the game was a complete throwback to the days of retro gaming. It came complete with retro sound effects and an overall vivid color palette, bypassing the old limitations of NES-era games. But retro it was, and it appealed to a lot of older gamers who missed the old sprite-based games. After this, a massive number of sprite games came out, and nowadays you can hardly tell sprite-based games ever disappeared from store shelves for a while (not counting Flash-based games). There are tons of sprite-based games being released all the time now, from both large and small companies. Sprites are definitely back in, but for how long? Will they keep up with 3D games? Well, let’s look at their advantages and disadvantages.

Advantages of Sprites

Street Fighter 2 HD Remix: The dimensions of this sprite of Ken are approximately 430 x 395

Well, like I said earlier, sprites are easier to calculate; let’s get into the specifics of why. A sprite has only a position and a size in X and Y. It has no vertices making it up; it’s essentially just a rectangle of pixels. That in itself immediately makes it easier to calculate than a model. Models are composed of vertices, representing points on the model, which are connected to each other to form triangles. You need to calculate these and connect the triangles to form a model, and that’s just the basics: we haven’t even counted surface normals (needed to shade a solid figure; without them we would just have a wireframe), texture coordinates, faces, etc. It’s easy to see that there are way more calculations involved with models than with sprites. That is the very reason sprites are used in backgrounds: to save on processing and memory.
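As a back-of-the-envelope sketch of that cost gap (the numbers are purely illustrative, not from any real engine):

```cpp
// Moving a sprite means transforming the 4 corners of its quad; moving a
// model means transforming every vertex it has, before even counting
// normals, texture coordinates, or lighting.
int spriteTransforms()               { return 4; }
int modelTransforms(int vertexCount) { return vertexCount; }
```

Even a modest 5,000-vertex character costs over a thousand times more transform work per frame than a single sprite quad, which is why backgrounds full of sprites stay cheap.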

There are many more complex calculations associated with models 

This applies to lighting as well, because 3D models require good lighting in order to look good. Sprites, on the other hand, don’t have any lighting (at least they aren’t required to), since the “lighting” is drawn right into the sprite. That means no lighting calculations for sprites, though this feeds into one of their disadvantages as well. Before I go into that disadvantage, I want to point out that textures work the same way as lighting: models need them, while the texture is drawn right into a sprite.


3D models need good lighting and textures, which mean more calculations. Sprites do not need these; it's all "drawn" into the sprite itself. 
 

Disadvantages of Sprites


BlazBlue: Ragna's sprites don't need textures or lighting; it's all drawn right in

Now then, that disadvantage is the fact that sprites need to be hand-drawn. Sprites need the lighting already drawn in, as well as any texture. There aren’t really any systems that layer a texture on top, at least none that I know of. Another disadvantage is that a model, once created, can be posed and shifted to form animations, while a sprite needs to be hand-drawn for every frame of its animation to look right. This can be very difficult; there are tools to help, but not many.


Blazblue: Noel requires hundreds of hand drawn frames for her animations

The best results for sprites usually come when every sprite is hand-drawn by professionals, and this can take an immense amount of time depending on the quality of the sprite. A simple sprite can be really quick to finish, but a detailed sprite such as those in BlazBlue can take a massive amount of time. The difference between making a model and making a sprite also depends on the person, because the processes of creating them are entirely different. Being a good modeller does not make you a good sprite artist, and vice versa.

Conclusion

So we can see that, calculation-wise, sprites are much less costly than models; that much was obvious. And even in games with models, sprites can be used rather effectively if masked well enough; they are still used for particle effects to this day. The quality of sprites may vary, though, depending on the artist, and if you’re creating a character with a wide range of animations, it can actually be easier to model the character than to sprite him. This is because a model is potentially easier to animate, since you’re not “redrawing” it every time the way a sprite requires. You need a good sense of perspective, animation, scale and proportion when making a sprite animated in complex ways (like in Street Fighter or BlazBlue). This creates a higher skill cap for such sprites, and they can still look less smooth than their model counterparts even after all the effort put in. Not that models don’t require effort, but they use formulas and interpolation to animate rather than an artist’s drawing skill. If an artist wants a character to look smoother, they have to draw more frames for the sprite, whereas a modeller need only position the model correctly.

Marvel vs Capcom 3: It's easier to make alternate costumes for 3D models. It can be just a texture swap or add on. Animations can remain the same. 2D sprites can't do this nearly as easily.

So, besides calculations being easier with sprites, sprites can debatably require more effort to get into a game, depending on the kind of sprite. There is a reason most games go for models now. For one, most games that want a 3D environment want to look realistic anyway, and sprites wouldn’t fit. Another reason is that, thanks to advancements in technology and modelling tools, models can look better much more easily; there are many tools out there like Maya that allow almost anyone to make a model. Sprites have a larger artistic skill cap for producing the truly great sprite animations. But there is also a feeling of nostalgia associated with sprites for many, which is why they are selling once again.

Mortal Kombat uses people, takes pictures of them, and turns them into sprites

One way developers have gotten past the cost of making so many animation sprites is the approach of Mortal Kombat or Donkey Kong Country. Basically, they made 3D models (or filmed real people), then took pictures of the models/people frame by frame and built the animations that way. This way they got the cheaper calculations of sprites while still producing each animation frame easily, since all they had to do was move a pose. It’s one way they have tried to combine the best of both in the past; the effect is alright, if not quite up to today’s standards of quality, but it has a retro look of sorts.

Shadow Complex (3D) on a 2D plane
vs
Castlevania (2D) on a... 2D plane

In this day and age, thanks to advancements in technology, it’s not hard for a person to make a model and a 3D game, but with enough dedication you can also make sprites for a 2D or even a 3D game. A lot of the choice between sprites and models depends on the type of game you want to give the player and the feeling you want the game to resonate with. A game like Shadow Complex, a 3D game on a 2D plane, evokes a distinctly different feeling from playing a sprite game such as Castlevania, just from the aesthetics of the sprites. It’s all about the feeling.

Wednesday, March 14, 2012

Graphics in Games 10 : Realistic vs. Non Realistic Portrayal in games

In the newest generation of graphics, we now have the technology to portray extremely realistic characters. But both today and in the past, games have not always striven for realistic-looking characters, sometimes preferring a more stylized look. There are a huge number of different stylized looks, however, and they can vary vastly in look and feel compared to realistic portrayals. Sometimes a game will even try to combine the two to give itself a unique look. Today I would like to compare the two styles and the benefits and cons of both.

Realistic 
vs
Stylized



Quick Definition

First off, let’s define what we mean by realistic portrayal. There are a large number of games with realistic lighting, effects, graphics and models, where perhaps the models’ proportions aren’t entirely correct. I will overlook imperfect proportions when the game in question otherwise shows every sign of trying to depict a realistic environment and world. This doesn’t mean it has to be a real-world setting, of course; it just means it looks realistic.

Also, to be fair, most games whose characters all have out-of-whack proportions (I am talking super-giant heads) are most likely NOT going for realism but for a stylized look. If there are effects such as cel shading that make the entire world look more cartoonish even with properly proportioned models, then I would put the game under non-realistic portrayal. I think it’s usually obvious when one game goes for realism and another goes for a more stylized look.

Realistic

Mass Effect has consistently shown great realistic visuals

There have always been games that strive to be the next revolutionary step in graphics and showcase themselves in tech demos. Today many more games strive for this goal, and many more succeed, thanks to the increased technology we now possess. Games such as Crysis, Gears of War, Mass Effect, Call of Duty, Battlefield, Final Fantasy, Bayonetta, Tekken and so on, way too many to count, all push for such realistic-looking settings. Their use of lighting, particle effects, facial animation and other effects gives them a very jaw-dropping look when done correctly. Like I said, there are a ton of games that already go for a realistic look. And there are at least as many games that instead try to portray a more unique style, perhaps more; there are way too many games being released to be sure.

Battlefield 3 provides some very realistic visuals

So what’s the advantage of going realistic? Well, there is personal preference: many people enjoy a more realistic-looking setting, and some love seeing developers push the next generation of graphics and being awed by how real a game can look. Depending on the game, realistic graphics are pretty much necessary. A game like Battlefield, being essentially a war simulator with a reputation to keep, needs to look as realistic as possible to cater to its fans. Choosing a realistic style in this case enhances the war-simulation feel and the overall experience of the game. I doubt the majority of fans would want to play it all cel shaded with large-headed characters (though of course some would).

Crysis has been a benchmark for PCs since 2007

With bump mapping, lightmapping, shadow mapping and other effects for conveying texture and lighting, realistic graphics have become a great possibility, and much easier to achieve than in the past. There are some very cool effects to enhance scenes and make them look more realistic, many of them easy to do. Techniques such as facial animation are also much better than in the past; games such as L.A. Noire show very impressive facial animation. However, I can’t say this point is only a positive for realistic games; it is a positive for non-realistic games too, because they also rely on great effects to enhance their feel. Basically, advancing technology helps both sides of the coin and can show massive improvements on both.

How L.A. Noire uses facial animation

Before we get into disadvantages, let’s look more into Non-Realistic so that we can compare the two more easily.

Non-Realistic
Bomberman's rather cartoonish look

There are plenty of non-realistic games out there with plenty of stunning effects that look “realistic,” but where the overall feel of the game aims for a non-realistic look. Just take Super Mario Galaxy: there are plenty of cool particle effects, but the overall lighting, the style of the characters, and the more simplified textures and color palette push it toward non-realistic. Speaking of Mario, a lot of Nintendo games go for this kind of look, due to the cartoonish, happy-go-lucky nature of their characters and to appeal to a wider audience. A realistic Mario might be a turn-off for some younger children.


Continuing on this point, non-realistic rendering lends itself to a much wider range of styles, which lets you appeal to different audiences in different ways. Again, going back to Mario: he needs a non-realistic look; he needs a bright, happy, inviting environment with quirky-looking characters that would never look right, or simply couldn’t exist, in a realistic style. Sure, a realistic-looking Bowser might be cool, but it might scare some children. This is why non-realistic works: it provides the opportunity for looks that truly appeal to certain audiences. Like any good cartoon, these games stick with the style that gives their characters their unique feel. Changing to a more realistic look can look weird and upset fans.

Bomberman tried to take the more realistic approach and failed

Again, as I said in the realistic section, improving effects and improving hardware have enabled greater effects in all games. No matter the style, improving technology will always enhance the feel of the game. It also means newer and easier ways of calculating techniques such as cel shading, which makes a game look more cartoony, and newer ways can mean more efficient ways. That leaves more room to add other effects or simply raise the overall frame rate. Once again, this applies on the realistic side too.

Dragon Ball Z games have always gone for a cell shaded look in order to look like the show

Comparison

So we’ve seen some examples of what both provide in terms of aesthetics, but let’s compare them in a more technical manner. This comparison is about which is easier to implement in a game overall, since we already know the benefits of using either one. I will award “points” to whichever I think has the advantage in each category. Let’s start with animation.

Animation

Uncharted has consistently had some great animations

We all know that realistic games try to go for realistic-looking animations, or at least try their best to. Really good examples are Uncharted 1, 2 and 3: Drake’s animations have a lot of very realistic qualities thanks to the techniques the developers used. But the thing is, a lot of effort MUST be made to make those animations look good. For a high-budget game by a well-known studio aiming for a realistic look, a lot of effort has to go into capturing and animating the character. Otherwise it would take players out of the experience: even with a realistic-looking setting, if the characters don’t match, there will be a loss of immersion. I can say this much: in a Call of Duty game, I always frown at how the characters run in a multiplayer match, because despite the game going for realism, they run so strangely it takes me out of the experience. This especially applies to facial animation in cutscene-heavy games; a really bad facial animation system can lead to some very bad losses of immersion.

Due to the Lego style, a skeleton matching an entire human isn't required to bring these characters to life

Meanwhile, on the unrealistic end, even a low-budget studio aiming for a certain style has more leeway with its animations. If they’re aiming for a funny walk on a cartoonish character, it doesn’t need all the detail a normal human requires to walk convincingly. They can spend fewer resources overall to achieve a certain look, because they haven’t committed to animation that matches a human, or a creature with a particular weight. Granted, they can still aim high, and they shouldn’t settle for bad animations either, but overall they can spend less to satisfy their needs compared to realistic games.

I think I have to give this one to unrealistic games, because in many cases fewer resources are needed to make the animations work for a particular character without losing immersion in the game.

Realistic 0 || Unrealistic 1.

Models and Textures
Batman: Arkham City needed to make sure Batman and his various rivals had sufficient detail to represent the iconic characters

It’s a similar case for models and textures as for animation. Realistic games, once again, have to put much more effort into making their characters look realistic. Model proportions must strive to be correct (though some deviation is still acceptable), and games striving for pure realism in their human characters usually try to get these right. Texture quality standards keep rising with this generation, so realistic games must strive to match them lest they disappoint. That means a lot of effort goes into producing great-looking textures and mapping them onto the character, and creating those textures can be quite a task.

Tales of Vesperia uses a cel-shaded style that doesn't need as much detail in textures

On the other hand, for unrealistic games, depending on the particular style, you could even skip textures entirely. A cel-shaded style like Tales of Vesperia’s means less need for textures, because as you can see it’s mostly solid color with some minor texturing, due to the nature of cel shading. That means less overall effort to implement the textures for this particular game. Model proportions again take less effort in many cases: characters don’t need to follow a strict human form, and changing their proportions is not as big a deal as it is for realistic games. This can also lead to very simple characters such as Kirby, who needs very little effort in both model and animation compared to a realistic human.

Overall unrealistic models and textures would probably require less effort in many cases.

Realistic 0 || Unrealistic 2.

Lighting
God of War III had great lighting effects

In terms of lighting, a lot of new technology has emerged that makes realistic games look truly fantastic, but once again far more precise calculations are usually required. Games like God of War III had some incredibly good lighting, which is needed to make the game look more realistic. Bump mapping requires good lighting to look right, and realistic characters simply won’t look realistic without it. This makes lighting a massive requirement, and a lot of effort must go into getting it to look good. Even in Mass Effect 2 there were instances where, despite the lighting being good, it caused some visual problems.
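The "precise calculations" here are the same per-light terms covered in the shader overview above: ambient, diffuse, and specular, summed for every pixel. A rough CPU-side sketch of that sum (a standard Phong-style model for illustration, not any particular engine’s code; the coefficient names ka/kd/ks are the usual textbook ones):

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Phong lighting: ambient + diffuse + specular intensity for one light.
// n = surface normal, l = direction to the light, r = reflected light
// direction, v = direction to the viewer (all assumed unit length).
float phong(Vec3 n, Vec3 l, Vec3 r, Vec3 v,
            float ka, float kd, float ks, float shininess) {
    float diffuse  = std::max(0.0f, dot(n, l));
    float specular = std::pow(std::max(0.0f, dot(r, v)), shininess);
    return ka + kd * diffuse + ks * specular;
}
```

Bump mapping works by perturbing the normal `n` per pixel before this formula runs, which is why it falls apart visually if the lighting feeding it isn’t good.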



Once again, for unrealistic games the best lighting isn’t always required. Depending on the type of game, if you’re going for a simpler, cel-shaded look you won’t need as many lighting effects. Granted, those stylized effects can still be built on top of some very involved lighting calculations, which means they can still take a lot of effort; this will vary with the game and the techniques the developers use.

However, like I said, realistic NEEDS good lighting in all cases or it will simply look bad. Therefore I have to give the point once again to unrealistic.

Realistic 0 || Unrealistic 3.

Particle Effects

This is where I might want to call a tie, because unrealistic games can go to just as great lengths as realistic ones to produce very good particle effects. Both can strive for amazing, realistic-looking particles. You could argue that unrealistic games can just pick effects that fit their style and settle for simpler particles, and that’s true; realistic games would probably have to go for the best in order to fit their look and make the game appear, well, realistic. I don’t want to hand this to unrealistic again, because a lot of unrealistic games use great particle effects that take just as much calculation as a realistic game’s, but on the other hand a realistic game needs to look real, and needs to look good doing it.

So I’ll give the point to no one here and call it a tie.

Realistic 0 || Unrealistic 3

Conclusion of Comparison

So we can see that unrealistic, more stylized games are probably easier to implement in general, and that makes sense. Much more effort is required to make games look like real life, though that’s not to say a lot of effort isn’t put into stylized games as well; they can require just as much. But the comparison shows that since a realistic look demands more effort, it also demands a larger budget and more time. That can be bad for indie developers (which is what I might become) and smaller companies with lower budgets. We don’t always have the resources to go for a realistic engine, which means that going for an unrealistic look can be more beneficial overall.



Skyward Sword went with a stylized look rather than a realistic one

Take a look at The Legend of Zelda: Skyward Sword on the Wii. The Wii has never been a powerhouse, but Skyward Sword uses a painting-like style to give it a unique feel, whereas its predecessor, Twilight Princess, went realistic. Also consider that The Wind Waker, with its cel-shaded style, aged better than Twilight Princess did. I know I went over this point in another post, but I want to reinforce it: the unrealistic look that was three years older aged better than the realistic look Twilight Princess went for. Skyward Sword also pushed the Wii to its limit and used a style that worked within the hardware’s constraints. We have never seen a truly realistic-looking game on the Wii simply because it doesn’t have the power, and going unrealistic helps get around that disadvantage.

I hate repeating myself, but The Wind Waker aged better than Twilight Princess did

I think it’s fair to say that overall there are currently more advantages for unrealistic-looking games than realistic-looking ones, but I enjoy them both. I love both types of games and I hope to see both evolve and grow. But for low budgets or low-tech consoles, unrealistic is probably the best way to go.

Tuesday, March 6, 2012

Cinematics in Games: First person vs. Third person

Cinematics have been in games for a long time, since even the earliest days. They have always been an essential tool for helping to tell a story, but there have been different ways of using them. One of my previous blog posts covered real-time vs. pre-rendered cutscenes, which focused more on their technical aspects than their immersive ones. In this post I want to talk about the different perspectives cinematics can take: first person or third person. By third person I mean the typical style normally seen in games, looking like a film, with a camera watching the action unfold in front of you; you typically see these in all the Final Fantasy games. First person is rather obvious: the action unfolds in front of your very eyes, sometimes with control and sometimes without, but you never leave the body of your character as they watch events play out. This is typical of the Elder Scrolls games; Oblivion and Skyrim clearly demonstrate it. So let’s see what each of them offers.

Third Person

We’ve known these kinds of cinematics the longest; they’ve appeared in far more games than first-person cinematics have. This is pretty much the default approach: use a camera just like a film would. That means it’s not exactly an original approach, but many games live or die by it. Bad cutscenes and poor camera work can make players want to skip the cinematics and thereby lose immersion and interest in the story. Some games hardly focus on these, using poor camera work or no real camera work at all; those are the ones players want to skip over. Meanwhile, games like Final Fantasy and Metal Gear Solid show great camera work and cinematic quality, taking the angles a filmmaker might take and showing great detail and care, making their cinematics worthy of film.

The long opening cinematic of Metal Gear Solid 4

Camera work and programming

Camera work can require quite a bit of effort, especially for in-game cutscenes. The camera needs to match the action on screen, with a variety of effects (such as blurring) to enhance it. Programming-wise, this means quite a few calculations to position the camera in the right places at the right moments. A lot of timing is required, and no doubt storyboards and some sort of planning process are used to work out what the cinematics should look like. Not to mention Metal Gear Solid and Final Fantasy have a ton of cinematics all over the game (Metal Gear Solid 4 being famous/infamous for 30-minute-plus cutscenes). A lot of care and detail goes into making the cinematics match the flow and tone of the game and keeping them interesting to watch. If it was just a static camera during every cutscene, people would be even more inclined to skip over them.

Mass Effect 2's opening sequence

The Experience

These kinds of cinematics create a film like quality, that some may or may not like. I for one enjoy cinematics of these kinds if they are done well enough. Certain series are able to offer consistently good quality to make these cinematics a pleasure for me to watch. There is a reason games need a team dedicated to cinematics if they are known for providing a lot of them.

Final Fantasy XIII-2 opening cutscene

So depending on the company and the game, these kinds of cinematics can be rather bad if there isn’t a large enough budget or enough effort put into them. That makes them a bore to sit through and leaves one less engaged in the experience. Done right, they can deliver the great dramatic moments that films provide and help tell the story in ways that normal gameplay wouldn’t be able to. Or can it?

First Person

These are the new kids on the block. Not every game uses them, and not all of them are good, but a number of games excel at these kinds of cinematics. First off, to qualify as a first-person cutscene, most of the control has to be taken away from you. You are still guided and have some input into the scene at hand, but for the most part things will always lead to the same conclusion whether you decide to act or not.

Skyrim's first person, opening cinematic

Actually, there aren’t a whole lot of games that greatly dedicate themselves to these kinds of moments. I am looking at Skyrim, Call of Duty, and BioShock as my examples; I can’t quite think of many others. I will take a closer look at BioShock, since it’s really the best example of this.

In BioShock, there is not a single instance where you are not in first person; you are always playing from the perspective of your character’s eyes. There are many moments where you can move around a bit, but ultimately you won’t be going anywhere until the “cutscene” is done. For example, at the beginning of the game you take a device that transports you into the heart of the city of Rapture. You are stuck in this capsule the whole time, but dramatic music and narration cue up as the sights of the city rise in front of your view. It shows all the signs of being a cutscene while you’re looking at it from a first-person view.

BioShock’s reveal of Rapture

BioShock consistently uses these cutscenes throughout the game, particularly in one instance where you...

(SPOILER)
Kill Andrew Ryan, the game’s antagonist, in first person, unable to control whether or not you smash him with a golf club.
(END SPOILER)

This is a pretty solid representation of a first-person cutscene. It’s going to happen, you can’t stop it, but you can feel and see what’s happening first-hand. You can even feel the blows being struck through the vibrations in your controller. Dramatic music cues enhance the scene, making it immersive and rather different from a third-person cutscene. Done right, these can be even more effective than their third-person counterparts.

Another example of this is in Call of Duty 4. You’re trying to escape a kill zone in a helicopter after rescuing a pilot, because there is a potential nuclear threat. You think you’re safe until suddenly the nuke goes off and your helicopter is caught in the blast. You wake up moments later, dazed, and the scene is a sickly red hue. Everyone around you is already dead, and your controller vibrates as you feel your character struggling to move. The blurring, shaky camera, and struggling breaths accentuate the feeling of impending doom. You can walk around and try to make it out of the broken-down helicopter, but the conclusion is always the same: you get out, but no matter how far you go, you collapse, look to the sky, and die.



The Experience

It was a dramatic moment that players didn’t expect the first time they played the game, and it was pretty powerful at the time. The fact that you played and viewed it in first person the whole way made it far more dramatic than it could have been in third person. Why? Because, honestly, we hardly knew this character, so if he died in third person, who would really care? Doing it in first person made it far more impactful, because it was as if you were embodying the character and you were the one dying. Not to mention the various effects that enhanced the scene (the blurring and staggered walking would not have worked as well in third person). This is a very solid example of how well first-person cinematics can work, done in a way a third-person cinematic could not replicate.

Call of Duty 4's immersive ending

Camera work and Programming

In terms of programming, this can be rather difficult to get right. You still need certain camera work, like shaking the camera to sell the first-person view. You need to make sure all the objects are at the right perspective and position so that the view has maximum impact; if they are out of place, the shot just looks bad and loses its power. If the camera isn’t shaky enough to match both the viewer’s movement and the effects on screen (such as a massive explosion), the player won’t feel like they are experiencing it. Getting the camera movement right is essential for any game attempting first-person cutscenes. Getting it wrong completely ruins the experience, because in my experience you can’t skip these scenes, and arguably you shouldn’t be able to, since they are first person; but players will be rolling their eyes if the result is a bad, non-immersive experience. It definitely carries more risk than third-person cutscenes, which is probably why far more games just stick to third person.
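That first-person shake is commonly driven by a small "trauma" value: big on-screen events bump it up, it decays every frame, and a fast wobble scaled by it offsets the camera, so the view settles naturally after an explosion. A simple sketch of that idea (the decay rate and sine-based wobble here are arbitrary illustrative choices, not a quote from any engine):

```cpp
#include <algorithm>
#include <cmath>

struct CameraShake {
    float trauma = 0.0f;  // 0 = steady camera, 1 = violent shake

    // Call when something big happens (explosion, impact).
    void addTrauma(float amount) {
        trauma = std::min(1.0f, trauma + amount);
    }

    // Per-frame: decay the shake and return this frame's view offset.
    float update(float dt, float time) {
        trauma = std::max(0.0f, trauma - dt);       // settle over time
        float magnitude = trauma * trauma;          // squared for ease-out feel
        return magnitude * std::sin(time * 50.0f);  // fast wobble
    }
};
```

The same offset would typically be applied to the camera’s pitch, yaw, and position; the key point is that the intensity is authored to match what the player sees, which is exactly the tuning effort described above.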

Conclusion

Neither method is better than the other, of course, but overall there is less risk in using third-person methods. Because players can simply skip them, and because they can still function with lower-quality camera work, they can get players right back into the action. Or a studio can pour resources in and make every cutscene a great cinematic experience that draws you into the story; you have either option, and you can put in as much or as little coding effort as you want. Also worth noting: any game can use third-person methods, even a first-person game. You can use them at any time.

First person is more limited because you typically only want to use it in a first-person game. It can be executed badly and end up boring, but in general it’s not that hard to avoid that. More and more games seem to be using it, though only the best truly immerse the player. Coding can be a finicky issue, because some games, such as Half-Life 2, allow the player to move around fully during important events where NPCs are talking. That requires coding the NPCs’ dialogue and lip-syncing but no camera involvement, which makes it easy to code in that sense; but does it really count as a first-person cutscene? Some might argue it does, since it’s still a cinematic experience, while others might say it’s less engaging. In any case, the coding can play the same role as in third person: there are moments like the one I just mentioned in Half-Life that are easy to code, while more complex coding goes into first-person sequences like BioShock’s cinematic views.

Weighing the overall experience of watching these cutscenes, I have to say that coding third-person cutscenes can be harder. The two are coded differently and provide different experiences, but overall you have more flexibility for interesting cinematic shots in third person, while first person can really immerse you if done correctly. You are, of course, limited to that first-person perspective, which leaves fewer options for complex cinematic shots. But I am not going to say one is better than the other. They both provide great experiences when done correctly, and they provide different ones. First person is more unique because it’s used less and can pretty much only be used in first-person games (with some exceptions I can’t think of currently, but I know they are out there). Third person can provide great cinematic experiences, though it is used much more commonly. They are both necessary for our games, and I’ve enjoyed a variety of both types of cinematics.