19. June 2015

 

Now we move on to a topic that people always seem to love, graphics!  In the past few chapters/videos I’ve said over and over “don’t worry, we will cover this later”, well… welcome to later. We are primarily going to focus on loading and displaying textures using a SpriteBatch.  As you will quickly discover, this is a more complex subject than it sounds.

 

As always, there is an HD video of the content available here

Before we can proceed too far we need a texture to draw.  A texture can generally be thought of as a 2D image stored in memory.  The source image of a texture can be in bmp, dds, dib, hdr, jpg, pfm, png, ppm or tga format.  In the "real world" that generally means bmp, jpg or png, and there is something to be aware of right away.  Of those three formats, only png has an alpha channel, meaning it supports transparency out of the box.  There are however ways to represent transparency in the other formats, as we will see shortly.  If you've got no idea which format to pick, or why, pick png.

 

 

Using the Content Pipeline

If you’ve been reading since the beginning you’ve already seen a bit of the content pipeline, but now we are going to actually see it in action with a real world example.  

Do we have to use the content pipeline for images?


I should make it clear, you can load images that haven't been converted into xnb format. As of XNA 4, a simpler image loading API was added that allows you to load gif, jpg and png files directly, with the ability to crop, scale and save. The content pipeline does a lot for you though, including massaging your texture into a platform friendly format, potentially compressing your image, generating mip maps or power of two textures, pre-multiplying alpha (explained shortly), optimizing loading and more. MonoGame included a number of methods for directly loading content to make up for its lack of a working cross platform pipeline. With the release of the content pipeline tool, these methods are deprecated. Simply put, for game assets (that is, not screenshots, dynamic images, etc.), you should use the content pipeline.
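For reference, here is a minimal sketch of what direct loading looks like using Texture2D.FromStream().  It skips everything the pipeline does for you (pre-multiplied alpha, mip maps, platform specific formats), so treat it as the exception rather than the rule.  The file name is just a placeholder and the code assumes it runs somewhere a GraphicsDevice is available, such as LoadContent().

// Hedged sketch: loading a png at runtime without the content pipeline.
// A texture loaded this way is NOT owned by the ContentManager, so it is
// your responsibility to call Dispose() on it when you are done.
using (var stream = System.IO.File.OpenRead("screenshot.png"))
{
    // Assign to a longer lived field if you need it beyond this scope,
    // and remember to Dispose() it yourself when finished.
    Texture2D runtimeTexture = Texture2D.FromStream(GraphicsDevice, stream);
}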

Create a new project, then in the Contents folder, double click the file Content.mgcb.

image

 

This will open the MonoGame Content Pipeline tool.  Let's add our texture file; simply select Edit->Add->Existing Item...

image

Navigate to and select a compatible image file.  When prompted, choose the mode that makes the most sense.  I want the original to be untouched, so I am choosing Copy the file to the directory.

image

 

Your content project should now look like:

image

The default import settings for our image are fine, but we need to set the Content build platform.  Select Content in the dialog pictured above, then under Platform select the platform you need to build for.

image

Note the two options for Windows, Windows and WindowsGL.  The Windows platform uses a DirectX backend for rendering, while WindowsGL uses OpenGL.  This does have an effect on how content is processed so the difference is important. 

Now select Build->Build, saving when prompted:

image

 

You should get a message that your content was built.

image

We are now finished importing, return to your IDE.

Important Platform Specific Information


On Windows the .mgcb file is all that you need. When the IDE encounters it, it basically treats it as a symlink and instead refers to the content it contains. Currently when building on MacOS using Xamarin, you have to manually copy the generated XNB content into your project and set its build type to Content. The generated files are located in the Output Folder configured in the Content Pipeline tool. I have been notified that a fix for this is currently underway, so hopefully the Mac and Windows development experience will be identical soon.
 
Alright, we now have an image to work with, let’s jump into some code.
 
 
 

Loading and displaying a Texture2D

So now we are going to load the texture we just added to the content project, and display it on screen.  Let’s just jump straight into the code.

using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;
using Microsoft.Xna.Framework.Input;

namespace Example1
{
    public class Game1 : Game
    {
        GraphicsDeviceManager graphics;
        SpriteBatch spriteBatch;
        Texture2D texture;

        public Game1()
        {
            graphics = new GraphicsDeviceManager(this);
            Content.RootDirectory = "Content";
        }

        protected override void Initialize()
        {
            base.Initialize();
        }

        protected override void LoadContent()
        {
            spriteBatch = new SpriteBatch(GraphicsDevice);
            texture = this.Content.Load<Texture2D>("logo");
        }

        protected override void UnloadContent()
        {
            //texture.Dispose();   // <-- only needed for textures loaded outside the ContentManager
            Content.Unload();
        }

        protected override void Update(GameTime gameTime)
        {
            if (GamePad.GetState(PlayerIndex.One).Buttons.Back == ButtonState.
                Pressed || Keyboard.GetState().IsKeyDown(Keys.Escape))
                Exit();
            base.Update(gameTime);
        }

        protected override void Draw(GameTime gameTime)
        {
            GraphicsDevice.Clear(Color.CornflowerBlue);

            spriteBatch.Begin();
            spriteBatch.Draw(texture,Vector2.Zero);
            spriteBatch.End();

            base.Draw(gameTime);
        }
    }
}
 
When we run this code we see:
 
image
 
 
Obviously your image will vary from mine, but our texture is drawn on screen at the position (0,0).
 
There are a few key things to notice here.  First we added a Texture2D to our class, which is essentially the in memory container for our texture image.  In LoadContent() we then load our image into our texture using the call:
 
texture = this.Content.Load<Texture2D>("logo");
 
You'll notice we use our Game's Content member here.  This is an instance of Microsoft.Xna.Framework.Content.ContentManager and it is ultimately responsible for loading binary assets from the content pipeline.  The primary method is the generic Load() method, which takes a single parameter: the name of the asset to load, minus the extension.  That last part is a very common tripping point.  In addition to Texture2D, Load() supports the following types:
  • Effect
  • Model
  • SpriteFont
  • Texture
  • Texture2D
  • TextureCube

It is possible to extend the processor to support additional types, but it is beyond the scope of what we are covering here today.
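As a quick, hedged illustration, every one of these goes through the same generic call.  The asset names below are hypothetical; they assume matching items were added to Content.mgcb:

// Hypothetical asset names -- substitute whatever you actually added to the pipeline.
SpriteFont menuFont = Content.Load<SpriteFont>("menuFont");
Effect blurEffect = Content.Load<Effect>("blurEffect");
Model shipModel = Content.Load<Model>("ship");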

Next we get to the UnloadContent method, where we simply call Content.Unload().  The ContentManager "owns" all of the content it loads, so this cleans up the memory for every object loaded through the ContentManager.  Notice I left a commented out example calling Dispose().  It's important to know that if you load a texture outside of the ContentManager, or create one dynamically, it is your responsibility to dispose of it or you may leak memory.  You may say, hey, this will all get cleaned up on program exit anyway.  Honestly that isn't technically wrong, although cleaning up after yourself is certainly a good habit to get into.

 

Memory Leaks in C#?


Many developers new to C# think that because the language is managed you can't leak memory. This simply isn't true. While memory management is much simpler in C# than in languages like C++, it is still quite possible to have memory leaks. In C# the easiest way is to not Dispose() of classes that implement IDisposable. An object that implements IDisposable owns an unmanaged resource (such as a Texture) and that memory will be leaked if nobody calls the Dispose() method. Wrapping the allocation in a using statement will result in Dispose() being called at the end of the scope. As a point of trivia, other common C# memory leaks are caused by not removing event listeners and, of course, calling leaky native code (P/Invoke).
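To make that concrete, here is a small sketch (not from this article's project) of the using pattern applied to a dynamically created texture, which the ContentManager does not own:

// A dynamically created 64x64 texture is an IDisposable wrapping GPU memory.
// The using block guarantees Dispose() runs when the scope ends, even if an
// exception is thrown part way through.
using (var dynamicTexture = new Texture2D(GraphicsDevice, 64, 64))
{
    var pixels = new Color[64 * 64];
    for (int i = 0; i < pixels.Length; i++)
        pixels[i] = Color.White;        // fill the texture with solid white
    dynamicTexture.SetData(pixels);
    // ... use the texture within this scope ...
}   // Dispose() is called here, releasing the unmanaged GPU resource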
 
Now that we have our texture loaded, it's time to display it on screen.  This is done with the following code:
    spriteBatch.Begin();
    spriteBatch.Draw(texture,Vector2.Zero);
    spriteBatch.End();

I will explain the SpriteBatch in a few moments, so let's instead focus on the Draw() call.  This needs to be called within a Begin()/End() pair.  SpriteBatch.Draw() has a LOT of overloads, which we will look at shortly.  In this example we simply draw the passed in texture at the passed in position (0,0).  Next let's look at a few of the options we have when calling Draw().

Where is 0,0?


Different libraries, frameworks and engines have different coordinate systems. In XNA, like most windowing or UI libraries, the position (0,0) refers to the top left corner of the screen. For sprites, (0,0) refers to the top left corner as well, although this can be changed in code. In many OpenGL based game engines, (0,0) is located at the bottom left corner of the screen. This distinction becomes especially important when you start working with 3rd party libraries like Box2D, which may use a different coordinate system. Using a top left origin has advantages when dealing with UI, as your existing OS mouse and pixel coordinates are the same as your game's. However the OpenGL approach is more consistent with mathematics, where positive X and Y values refer to the top right quadrant of a Cartesian plane. Both are valid options and work equally well; converting between them just requires some brain power.
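If you do end up mixing coordinate systems, the conversion is simple arithmetic.  The helper below is a hypothetical example (not part of XNA or MonoGame) showing how a bottom-left-origin point, say from a physics library, maps into XNA's top-left-origin screen space:

// Hypothetical helper: convert a bottom-left-origin point into top-left-origin
// screen space. screenHeight is the height of your back buffer in pixels.
Vector2 ToTopLeftOrigin(Vector2 bottomLeftPoint, float screenHeight)
{
    // X is unchanged; Y is measured from the opposite edge of the screen.
    return new Vector2(bottomLeftPoint.X, screenHeight - bottomLeftPoint.Y);
}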

 

Translation and Scaling

spriteBatch.Draw(texture, destinationRectangle: new Rectangle(50, 50, 300, 300));
 
This will draw our sprite at the position (50,50) and scaled to a width of 300 and a height of 300.

image
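If you would rather think in terms of a position and a scale factor instead of a destination rectangle, the optional-parameter Draw() overload this tutorial is using also accepts position and scale values.  Assuming your MonoGame version includes that overload, the call above could be written roughly as:

// Roughly equivalent: draw at (50,50), scaling the texture so it ends up 300x300.
// The scale factors are derived from the source texture's dimensions.
spriteBatch.Draw(texture,
    position: new Vector2(50, 50),
    scale: new Vector2(300f / texture.Width, 300f / texture.Height));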

 

Rotated

spriteBatch.Draw(texture, 
    destinationRectangle: new Rectangle(50, 50, 300, 300),
    rotation:-45f
    );

This will rotate the image by –45 about its origin.  Be aware that the rotation parameter is in radians, not degrees, so to rotate by exactly –45 degrees you would pass something like MathHelper.ToRadians(-45f).

image

 

Notice that the rotation was performed relative to the top left corner of the texture.  Quite commonly when rotating and scaling you would rather do it about the sprite's midpoint.  This is where the origin value comes in.

 

Rotated about the Origin

spriteBatch.Draw(texture,
    destinationRectangle: new Rectangle(150 + 50,150 + 50, 300, 300),
    origin:new Vector2(texture.Width/2,texture.Height/2),
    rotation:-45f
    );

Ok, this one may require a bit of explanation.  The origin is now the midpoint of our texture; however, we are now translating and scaling relative to that midpoint as well, not the top left.  This means the coordinates passed into our Rectangle need to take this into account if we wish to remain centered.  Also keep in mind that you are resizing the texture as part of the draw call.  This code results in:

image

 

For a bit of clarity, if we hadn't translated (moved) the above and instead used this code:

spriteBatch.Draw(texture,
    destinationRectangle: new Rectangle(0, 0, 300, 300),
    origin:new Vector2(texture.Width/2,texture.Height/2),
    rotation:-45f
    );
 
We would rotate centered on our sprite, but at the origin of our screen:

image

 

So it’s important to consider how the various parameters passed to draw interact with each other!

 

Tinted

spriteBatch.Begin();
spriteBatch.Draw(texture, 
    destinationRectangle: new Rectangle(50, 50, 300, 300),
    color:Color.Red);
spriteBatch.End();
 
image
 
The Color passed in (in this case Red) is multiplied with every pixel in the texture.  Notice how it only affects the texture; the Cornflower Blue background is unaffected.  Because the tint is multiplicative, the blue pixels, which contain no red, came out black-ish, while white pixels simply became red.
 
 
 

Flipped

spriteBatch.Draw(texture, 
    destinationRectangle: new Rectangle(50, 50, 300, 300),
    effects:SpriteEffects.FlipHorizontally|SpriteEffects.FlipVertically
    );

That's about it for Draw(); now let's look a bit closer at SpriteBatch.

 

SpriteBatch

 

In order to understand exactly what SpriteBatch does, it's important to understand how XNA does 2D.  At the end of the day, with modern GPUs, dedicated 2D renderers no longer really exist.  Instead the renderer is actually still working in 3D and faking 2D.  This is done by using an orthographic camera (explained later, don't worry) and drawing each texture onto a 2D quad that is parallel to the camera.  SpriteBatch takes care of this process for you, making it feel like you are still working in two dimensions.

That isn't all, however; SpriteBatch is also a key optimization trick.  Consider a scene consisting of hundreds of small block shaped sprites, each with its own 32x32 texture, plus all of the active characters in your scene, each with their own texture being drawn to the screen.  This would result in hundreds or thousands of Direct3D or OpenGL draw calls, which would really hurt performance.  This is where the "Batch" part of SpriteBatch comes in.  In its default operating mode (deferred), it simply queues up all of the drawing calls; they aren't executed until End() is called.  It then tries to "batch" them all together into as few draw calls as possible, thus rendering as fast as possible.

There are settings attached to a SpriteBatch, specified in the Begin() call, that we will see shortly.  These are the same for every single Draw call within the batch.  Additionally you should try to keep every Draw call within the batch using the same texture, or within as few different textures as possible; each texture switch within a batch incurs a performance penalty.  You can also call multiple Begin()/End() pairs in a single render pass, just be aware that the Begin() process is rather expensive and this can quickly hurt performance if you do it too many times.  Don't worry though, there are ways to easily organize multiple sprites within a single texture, as sketched below.  If by chance you actually want to perform each Draw call as it occurs, you can instead run the sprite batch in immediate mode, although since XNA 4 (which MonoGame is based on) there is little reason to use immediate mode, and the performance penalty is harsh.
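Here is a rough sketch of that idea.  The atlas texture and its 8x8 grid of 64x64 cells are assumptions for illustration, not something built earlier in this tutorial; the point is simply that many sprites drawn from one texture can be submitted as a single batch:

// atlasTexture is assumed to be a Texture2D containing an 8x8 grid of 64x64 sprites,
// loaded through the content pipeline like any other texture.
spriteBatch.Begin();
for (int i = 0; i < 64; i++)
{
    // pick the i-th 64x64 cell out of the atlas
    var sourceCell = new Rectangle((i % 8) * 64, (i / 8) * 64, 64, 64);
    spriteBatch.Draw(atlasTexture,
        position: new Vector2((i % 8) * 70, (i / 8) * 70),
        sourceRectangle: sourceCell);
}
spriteBatch.End();   // all 64 sprites share one texture, so they batch together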

One other major function of the SpriteBatch is handling blending, which is how overlapping sprites interact.

 

Sprite Blending

Up until now we’ve used a single sprite with no transparency, so that’s been relatively simple.  Let’s instead look at an example that isn’t entirely opaque.

Let's go ahead and add a transparent sprite to our content project.  Myself, I am going to use this one:

transparentSprite

… I’m sorry, I simply couldn’t resist the pun.  The key part is that your sprite supports transparency, so if you draw it over itself you should see:

transparentSpriteOverlay

 

Now let’s change our code to draw two sprites in XNA.

using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;
using Microsoft.Xna.Framework.Input;

namespace Example2
{
    public class Game1 : Game
    {
        GraphicsDeviceManager graphics;
        SpriteBatch spriteBatch;
        Texture2D texture;

        public Game1()
        {
            graphics = new GraphicsDeviceManager(this);
            graphics.PreferredBackBufferWidth = 400;
            graphics.PreferredBackBufferHeight = 400;
            Content.RootDirectory = "Content";
        }

        protected override void LoadContent()
        {
            spriteBatch = new SpriteBatch(GraphicsDevice);
            texture = this.Content.Load<Texture2D>("transparentSprite");
        }

        protected override void Update(GameTime gameTime)
        {
            if (GamePad.GetState(PlayerIndex.One).Buttons.Back == ButtonState.
                Pressed || Keyboard.GetState().IsKeyDown(Keys.Escape))
                Exit();
            base.Update(gameTime);
        }

        protected override void Draw(GameTime gameTime)
        {
            GraphicsDevice.Clear(Color.CornflowerBlue);

            spriteBatch.Begin();
            spriteBatch.Draw(texture, Vector2.Zero);
            spriteBatch.Draw(texture, new Vector2(100,0));
            spriteBatch.End();

            base.Draw(gameTime);
        }
    }
}
 
... and run:
image
Pretty cool.

 

This example worked right out of the box for a couple of reasons.  First, our two sprites were transparent and identical, so draw order didn't matter.  Also, when we ran the content pipeline, the default importer settings (and the default SpriteBatch blend mode) are transparency friendly.

image

This setting creates a special transparency channel for your image upon import, which is used by the SpriteBatch when calculating transparency between images.
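To make "pre-multiplied" a little more concrete, here is a sketch of what the processor does to each pixel; the numbers are illustrative only:

// Straight (non pre-multiplied) alpha: a 50% transparent pure red pixel
//     (R, G, B, A) = (255, 0, 0, 128)
// Pre-multiplied alpha: each color channel is multiplied by A/255 at import time
//     (R, G, B, A) = (128, 0, 0, 128)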

 

Let’s look at a less trivial example, with a transparent and opaque image instead.

using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;
using Microsoft.Xna.Framework.Input;

namespace Example2
{
    public class Game1 : Game
    {
        GraphicsDeviceManager graphics;
        SpriteBatch spriteBatch;
        Texture2D texture;
        Texture2D texture2;

        public Game1()
        {
            graphics = new GraphicsDeviceManager(this);
            graphics.PreferredBackBufferWidth = 400;
            graphics.PreferredBackBufferHeight = 400;
            Content.RootDirectory = "Content";
        }

        protected override void LoadContent()
        {
            spriteBatch = new SpriteBatch(GraphicsDevice);
            texture = this.Content.Load<Texture2D>("logo");
            texture2 = this.Content.Load<Texture2D>("transparentSprite");
        }

        protected override void Update(GameTime gameTime)
        {
            if (GamePad.GetState(PlayerIndex.One).Buttons.Back == ButtonState.
                Pressed || Keyboard.GetState().IsKeyDown(Keys.Escape))
                Exit();
            base.Update(gameTime);
        }

        protected override void Draw(GameTime gameTime)
        {
            GraphicsDevice.Clear(Color.CornflowerBlue);

            spriteBatch.Begin();
            spriteBatch.Draw(texture, Vector2.Zero);
            spriteBatch.Draw(texture2, Vector2.Zero);
            spriteBatch.End();

            base.Draw(gameTime);
        }
    }
}
 
When run:

image

So far, so good.  Now let’s mix up the draw order a bit…

spriteBatch.Begin();
spriteBatch.Draw(texture2, Vector2.Zero);
spriteBatch.Draw(texture, Vector2.Zero);
spriteBatch.End();

… and run:

image

Oh…

As you can see, the order we make Draw calls in is, by default, the order the sprites are drawn: the second Draw() call draws over the results of the first Draw() call, and so on.

 

There is a way to explicitly set the drawing order:

spriteBatch.Begin(sortMode: SpriteSortMode.FrontToBack);
spriteBatch.Draw(texture2, Vector2.Zero, layerDepth:1.0f);
spriteBatch.Draw(texture, Vector2.Zero, layerDepth:0.0f);
spriteBatch.End();

 

Here you are setting the SpriteBatch sort order to front to back, then manually setting the draw layer in each draw call.  As you might guess, there is also a BackToFront setting.  SpriteSortMode is also what determines whether drawing is immediate (SpriteSortMode.Immediate) or deferred (SpriteSortMode.Deferred).

 

Blend States

 

We mentioned earlier that textures imported using the Content Pipeline have, by default, a special pre-calculated transparency channel created.  This corresponds with SpriteBatch's default BlendState, AlphaBlend.  This uses the pre-multiplied values created by the pipeline to determine how overlapping transparent sprites are rendered.  If you don't have a really good reason otherwise, and are using the Content Pipeline to import your textures, you should stick to the default.  I should point out that this behavior only became the default in XNA 4, so older tutorials may behave quite differently.

 

The old default used to be interpolative blending, which used the straight RGBA values of the texture to determine transparency.  This could lead to some strange rendering artifacts (discussed here: https://en.wikipedia.org/wiki/Alpha_compositing).  The advantage is that all you need in order to blend images is an alpha channel; there was no requirement to create a special pre-multiplied channel, which means you didn't have to run these images through the content pipeline.  If you wish to do things the "old" way, set Premultiply Alpha to false in the Texture Processor settings of the Content Pipeline when importing your assets (assuming they aren't simply loaded directly from file).  Then in your SpriteBatch, do the following:

spriteBatch.Begin(blendState:BlendState.NonPremultiplied);
 
There are additional BlendState options, including Additive (colors are simply added together) and Opaque (subsequent draw calls simply overwrite earlier ones).  You can have a great deal of control over the BlendState, but most projects simply will not require it.  One other thing I ignored is chroma keying.  This is another option for supporting transparency: you dedicate a single color to be transparent, then specify that color in the Content Pipeline.  Essentially you are forming a 1-bit alpha channel and "green screening" like in the movies.  Obviously you cannot then use that color in your image.  In exchange for ugly source sprites and extra labor, you save on file size as you don't need to encode an alpha channel.
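For example, an additive batch (a small sketch reusing the texture we loaded earlier) looks like this; overlapping pixels simply get brighter, which is handy for glows, fire and similar effects:

spriteBatch.Begin(blendState: BlendState.Additive);
spriteBatch.Draw(texture, Vector2.Zero);
spriteBatch.Draw(texture, new Vector2(25, 0));   // the overlapping area is additively brightened
spriteBatch.End();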
 
 
There is some additional functionality built into SpriteBatch, including texture sampling, stencil buffers, matrix transforms and even special effects.  These are well beyond the basics though, so we will have to cover them at a later stage.
 
 

The Video

 



16. June 2015

 

Now we are going to talk about two important concepts in AI development for 2D games: path following and navigation meshes.  Path following is exactly what you think it is: you create paths and follow them.  This is useful for creating predefined paths in your game.  When you are looking for more dynamic pathfinding for your characters, a Navigation Mesh (or NavMesh) comes to the rescue.  A NavMesh is simply a polygon mesh that defines where a character can and cannot travel.

 

As always there is an HD video of this tutorial available here.

Let’s start with simple path following.  For both of these examples, we are going to want a simple level to navigate.  I am going to create one simply using a single sprite background that may look somewhat familiar…

image

 

So, we have a game canvas to work with, let’s get a character sprite to follow a predefined path.

 

Path2D and PathFollow2D

 

First we need to start off by creating and defining a path to follow.  Create a new Path2D node:

image

 

This will add additional editing tools to the 2D view:

image

 

Click the Add Point button and start drawing your path, like so:

image

 

Now add a PathFollow2D node, and a Sprite attached to that node, like so:

image

 

There are the following properties on the PathFollow2D node:

image

 

You may find that your sprite starts off rotated for some reason.  The primary setting of concern though is the Offset property.  This is the distance travelled along the path; we will see it in action shortly.  The Loop value is also important, as it causes the offset to wrap back to 0 once it reaches the end of the path and start the travel all over again.  Finally I turned Rotate off, as I don't want the sprite to rotate as it follows the path.

 

Now, create and add a script to your player sprite, like so:

extends Sprite


func _ready():
   set_fixed_process(true)

func _fixed_process(delta):
   get_parent().set_offset(get_parent().get_offset() + (50*delta))

 

This code simply gets the sprite's parent (the PathFollow2D node) and increments its offset by 50 pixels per second.  You can see the results below:

PathFollow

 

You could of course have controlled the offset value using keyframes and an AnimationPlayer, as described in the previous chapter.

 

So that's how you can define movement along a predefined path… what about doing something a bit more dynamic?

 

Navigation2D and NavigationPolygon

 

Now let’s create a slightly different node hierarchy.  This time we need to create a Navigation2D Node, either as the root, or attached to the root of the scene.  I just made it the root node.  I also loaded in our level background sprite.  FYI, the sprite doesn’t have to be parented to the Navigation2D node.

image

 

Now we need to add a Nav Mesh to the scene, this is done by creating a NavigationPolygonInstance, as a child node of Navigation2D:

image

 

This changes the menu available in the 2D view again; now we can start drawing the NavMesh.  Start by outlining the entire level.  Keep in mind, the nav mesh defines where the character can walk, not where they can't, so make the outer bounds of your initial polygon the same as the furthest extent the character can walk.  To start, click the Pen icon.  On first click you will be presented with this dialog:

image

 

Click create.  Then define the boundary polygon, like so:

image

 

Now, using the Pen button again, start defining polygons around the areas the character can't travel.  This will cut those spaces out of the navigation polygon.  After some time, I ended up with something like this:

image

 

So we now have a NavMesh, let’s put it to use.  Godot is now able to calculate the most efficient path between two locations.

For debugging purposes I quickly imported a TTF font; you can read about that process in Chapter 5 on UI, Widgets and Themes.  Next attach a script to your Navigation2D node, then enter the following code:

extends Navigation2D
var path = []
var font = null
var drawTouch = false
var touchPos = Vector2(0,0)
var closestPos = Vector2(0,0)

func _ready():
   font = load("res://arial.fnt")
   set_process_input(true)

func _draw():
   if(path.size()):
      for i in range(path.size()):
         draw_string(font,Vector2(path[i].x,path[i].y - 20),str(i+1))
         draw_circle(path[i],10,Color(1,1,1))
      
      if(drawTouch):
         draw_circle(touchPos,10,Color(0,1,0))  
         draw_circle(closestPos,10,Color(0,1,0))
   

func _input(event):
   if(event.type == InputEvent.MOUSE_BUTTON):
      if(event.button_index == 1):
         if(path.size()):
            touchPos = Vector2(event.x,event.y)
            drawTouch = true
            closestPos = get_closest_point(touchPos)
            print("Drawing touch")
            update()
            
      if(event.button_index == 2):
         path = get_simple_path(get_node("Sprite").get_pos(),Vector2(
                event.x,event.y))
         update()

 

This code has two tasks.  First, when the user right clicks, it calculates the closest path between the character sprite and the clicked location.  This is done using the critical function get_simple_path(), which returns a Vector2Array of points between the two locations.  Once you've calculated at least one path (the path array needs to be populated), left clicking outside of the navmesh will show two circles: one where you clicked, the other representing the closest navigable location, as returned by get_closest_point().

 

Here is our code in action:

PacNav

As you right click, a new path is calculated and drawn as white dots.  Left clicking then marks the location of the click and the nearest walkable location in the nav polygon.  You may notice the first left click resulted in a location being drawn at the left of the screen.  This is because my navmesh wasn't watertight; let's look:

image

 

Although minuscule in size, this small sliver of polygon is a valid path to the computer.  When setting up your nav meshes, be sure you don't leave gaps like this!

 

There are a couple of things you might notice.  The path returned is the minimum direct navigable line between two points.  It does not, however, take into account the size of the item you want to move; that is logic you need to provide yourself.  For something like Pac-Man, you are probably better off using a cell based navigation system built on an algorithm like A* (A-star).  I really wish get_simple_path() allowed you to specify the radius of your sprite's bounding circle, to determine whether the path is actually wide enough to travel.  As it stands, you have to completely fill in any areas that are too small for your sprite, which makes Navigation2D of little use for nodes of varying sizes.

 

Regardless of these limitations, Navigation2D and Path2D provide a great template for 2D AI development.

 

The Video



7. June 2015

 

In the previous tutorial we covered Sprite Animation, although to be honest it was more about creating animation ready sprites.  The actual way we performed animation wasn’t ideal.  Fortunately we are about to cover a way that is very much ideal… and capable of a great deal more than just animating sprites!

 

As always, there is an HD video of this tutorial available right here or embedded below.  It's important to have followed the previous tutorial, as we will be building directly on top of it.

 

Keyframes Explained

 

Before we get too far into this tutorial I think it's pretty critical to cover a key concept in animation: keyframing.  Essentially you animate by setting a number of "key" frames along the animation's timeline, then let the computer take care of the rest.  You can set a keyframe on just about any property available in Godot, as we will soon see.  For example you can create a key on the position value of a sprite, then advance the timeline and set another key at a different position.  The computer will then interpolate the position over time between those two keys.  This interpolation between keyframes is often referred to as "tweening", as in "in-betweening".  Don't worry, it will make a lot more sense when we jump in shortly.

 

AnimationPlayer

 

In the previous tutorial, we created a simple animation using code to increment the current frame at a fixed play rate.  Now we are going to accomplish the same thing using the built in animation system in Godot. 

Start by opening up the previous project and removing the code from our AnimatedSprite.  Now add an AnimationPlayer node under the root of your scene, like so:

image

 

With the AnimationPlayer selected, you will notice a new editor across the bottom of the 2D window:

image

 

This is your animation timeline control.  Let's create a new animation named "walkcycle".

Click the New Animation icon

image

 

Name your animation and click Ok

image

 

Click the Edit icon

image

 

This will extend the animation options even more.

image

 

First let’s set the duration of our animation to 2 seconds:

image

 

You can then adjust the resolution of the animation timeline using the Zoom slider:

image

 

Here we’ve zoomed in slightly to show just over 2 seconds:

image

 

Now that we are in edit mode with our AnimationPlayer selected, you will notice there are new options available across the top of the 2D view:

image

 

This is a quick way to set keys for a node's positioning information.  You can toggle whether the key will store location, rotation and/or scale data.  You set a key by pressing the key icon.  The first time you press it you will be prompted to create a new track of animation.

Select your sprite, make sure the timeline is at 0 and create a key.  Advance the timeline to 2 seconds, move the sprite slightly to the right, then hit the key icon again to create another keyframe.

g1

 

Press the play icon in the AnimationPlayer to see the animation you just created:

g2

 

Well, that's certainly movement, but pretty crap as far as animations go, eh?  How about we add some frame animation to the mix?  Here is where you start to see the power of animation in Godot.

 

With the AnimationPlayer selected, rewind the timeline back to zero and make sure you select your AnimatedSprite.  In the Details panel you will notice that all of the properties have a little key beside them:

image

 

This is because you can keyframe just about any value in Godot.  We are now going to do it with the Frame value.  This is the value we programmatically increased to create our animation before; now we will instead drive it with keyframes.  With the timeline at 0, set Frame to 0 then click the key icon to its right.  Click Create when it prompts you to create a new track.  Then move the timeline to the end, increase Frame to the final frame (21 in my case), then click the key again.  This will create a new track of animation:

image

 

You can see the different track names across the left.  The blue dots represent each key frame in the animation.  There is one final change we have to make.  Drop down the icon to the right of the animation track and change the type to Continuous, like so:

image

 

Now when you press play, you should see:

g3

 

Playing Your Animation

 

While your animation appears properly if you press Play in the AnimationPlayer interface, if you press Play on your game, nothing happens.  Why is this?

 

Well simply put, you need to start your animation.  There are two approaches to starting an animation.

 

Play Animation Automatically

You can set the animation to play automatically.  This means that when the AnimationPlayer is created, it will automatically start the selected animation.  You can toggle whether an animation plays automatically using this icon in the AnimationPlayer controls.

image

 

Play an Animation Using Code

Of course, AnimationPlayer also has a programmatic interface.  The following code can be used from the scene root to play an animation:

extends Node

func _ready():
   get_node("AnimationPlayer").play("walkcycle")

 

Scripting the AnimationPlayer

 

Say you want to add a bit of logic to your keyframed animation…  with AnimationPlayer you have a couple options we can explore.

 

First there are the events that are built into the AnimationPlayer itself:

image

For simple actions like running a script when an animation changes or ends, using AnimationPlayer connections should be more than enough.

 

What about if you wanted to execute some code as part of your animation sequence?  Well that is possible too.  In your Animation editor window, click the Tracks button to add a new animation track:

image

 

Select Add Call Func Track:

image

 

Another track will appear in your animation.  Click the green + to add a new keyframe.

image

 

Now left click and drag the new key to about the halfway (1 second) mark.  Switch to edit mode by clicking the pen-over-a-dot icon, then click your keyframe to edit it.  In the name field enter halfway.  This is the name of the method we are going to call:

g4

 

Add a method to your root scene named halfway:

extends Node

func _ready():
   get_node("AnimationPlayer").play("walkcycle")

func halfway():
   print("Halfway there")

 

When the keyframe is hit, the halfway method will be called.  Adding more function calls is as simple as adding more keys, or additional Call Func tracks.  As you may have noticed in the video above, you also have the ability to pass parameters to the called function:

image

 

Finally, you can also access animations, tracks and even individual keys directly from code.  The following example changes the destination of our pos track.  This script was attached to the AnimationPlayer node:

extends AnimationPlayer


func _ready():
   var animation = self.get_animation("walkcycle")
   animation.track_set_key_value(animation.find_track("AnimatedSprite:transform/pos"),1,Vector2(400,400))

 

Now when you run the code you should see:

g5

 

The Video



3. June 2015

 

In this tutorial we are going to look at Sprite Animation in Godot Engine, specifically on using the AnimatedSprite class.  We are going to import and create a node that has multiple frames of animation, then look at some code to flip between frames.  In the immediately following tutorial, we will then cover a much better animation method using AnimationPlayer.

 

As always, there is an HD Video version of this tutorial available right here or embedded below.

 

Alright, let's jump right in with AnimatedSprite.

 

Sprite Animation

AnimatedSprite is a handy Node2D derived class that enables you to have a node with multiple SpriteFrames.  In plain English, this class enables us to have a sprite with multiple frames of animation. 

 

Speaking of frames of animation, this is the sequence of png images I am going to use for this example:

image

 

You can download the zip file containing these images here, or of course you can use whatever images you want.

 

Now we simply want to import them into our project using the standard Import->2D Texture method.  Be aware that you can multi-select in the Importer, so you can import the entire sequence in one go.  Assuming you've done it right, your FileSystem should look somewhat like:

image

 

Now add an AnimatedSprite node to your scene like so:

image

 

Now we add the frames to our AnimatedSprite by selecting Frame->New SpriteFrames

image

 

Now drop it down again and select Edit:

image

 

The 2D editor will now be replaced with the SpriteFrames editor.  Click the open icon:

image

 

Shift select all of the sprite frames and select OK

image

 

All of your sprites should now appear in the editor:

image

 

Now let’s add some code to flip through the frames of our AnimatedSprite.  Attach a script to the AnimatedSprite node, then use the following code:


extends AnimatedSprite

var tempElapsed = 0

func _ready():
   set_process(true)
   
func _process(delta):
   tempElapsed = tempElapsed + delta
   
   if(tempElapsed > 0.1):
      if(get_frame() == self.get_sprite_frames().get_frame_count()-1):
         set_frame(0)
      else:
         self.set_frame(get_frame() + 1)
      
      tempElapsed = 0
   
   print(str(get_frame() + 1))

The logic is pretty simple.  In our process tick we accumulate the elapsed time in tempElapsed; once 1/10th of a second has elapsed, we move on to the next frame.  If we are at the last frame of the animation, we go back to the very first frame.

 

When you run it, you should see:

walking

 

Pretty cool!  However, instead of advancing the frame using code there is a much better approach to animation, that we will see in the next tutorial.  Stay tuned.

 

The Video



1. June 2015

 

Today we are going to look at creating 2D maps composed of tiles.  You can think of tiles as re-usable, Lego-like sprites that are assembled to create a more detailed map.  Tiles are stored in a data structure called a tile set, where collision details can be added.  These tiles and tile sets are then used to "paint" 2D scenes in something called a tile map.  A tile map itself can contain multiple layers of tiles stacked on top of each other.  Don't worry, it will make sense once we jump in.

 

WARNING!


When I wrote this tutorial, the majority of the functionality covered was under very active development. In order to follow along you need to have version 4.8 installed. Currently 4.8 is in preview release only; hopefully it will be released soon and I can remove this message. For now however, if you want to work with 2D tile maps with collision data, you need to install the development release. For details on how to do this please read this post.

 

So, at this point I assume you either have the developer preview installed, enough time has elapsed that this functionality is in the main release, or you are simply reading on for future reference.  With all the disclaimers out of the way, let's jump in!

 

There is an HD video version of this tutorial available here: [Coming Soon].

 

Creating a Tileset

 

First start off by loading a sprite sheet texture in Unreal Engine; details of loading a sprite are available here.

For this particular example we need some tiles to work with.  Instead of creating my own spritesheets, I am going to use some of the free graphics that Kenney.nl makes available, specifically the Platform Pack.  Obviously you can use whatever image you wish, just be sure that the tiles are all the same size and, ideally, that your image is a power of two in size.

Import the spritesheet you are going to use for your tiles, in my case I selected Spritesheets/spritesheet_ground.png.  Make any changes you wish to the texture, such as disabling mipmaps and turning filtering to nearest.

Now right click your newly created texture and select Sprite Actions->Create Tileset:

image

 

This will then create a TileSet object, double click it to open the editor.

image

 

The TileSet editor should appear:

image

 

Across the left hand side are all of the tiles that are in your imported sprite.  Selecting one will make it visible in the top right window.  The bottom right window has properties for the entire texture set.  The most important to set right away is the Tile Size:

image

 

Here you enter the pixel dimensions of each individual tile within your image.  In the spritesheet from Kenney.nl, each tile is 128x128 in size.  The remaining settings are for tilesets that have gaps between tiles and aren't applicable in this case.  Both the left and top right window can be zoomed and panned using the regular commands.

 

Now let’s look at setting up collision shapes for a few tiles.  First select a tile from the left side, like so:

image

 

A white rectangle will appear around the selected tile, and the tile will now be shown in the top right window:

image

 

We can now define bounding shapes using the toolbar:

image

 

In this case a simple box is the easiest (and least processing intensive) option, so click Add Box:

image

 

This will make it so the entire surface of the tile causes a collision.  For non-box shaped tiles, you will often want to use the Add Polygon option instead, then define the collision boundary accordingly, like so:

tile

 

Simply click for each vertex you wish to create.  Once done, hit Enter to finish your shape.  You can shift-click to add new points to an existing shape.

Repeat this step for each tile in your set that can be collided with.  If a sprite can pass completely through a tile without colliding, you don't need to provide a collision shape for that tile at all.

You can easily check which tiles you’ve defined a collision shape for by clicking Colliding Tiles:

image

 

When done, click Save; we have just created our first TileSet.

 

Creating a TileMap

 

Now it's time to create a tile map.  To create one, select Add New->Paper2D->Tile Map:

image

 

This will create a new tile map object.  Double click it to bring up the tilemap editor.

image

 

Here is the tilemap editor in action:

image

 

On the left hand side is a selection of tiles you can paint with.  In the middle is the canvas you paint on, while on the right are your layer controls and the layer properties.  There are a couple critical things you need to configure right away.

 

First select your tileset.  On the left hand side, drop down the Active Tile Set dialog ( hit the grid icon ) and select the tile set we just created.

image

 

Now in the layer properties, we set the size of our tiles and the overall width and height of our layer ( in tiles ):

image

Start by selecting a base tile to fill the entire map with, select fill mode and then click somewhere inside the map grid, like so:

 

Select base tile:

image

 

Choose Fill:

image

 

And click:

image

 

Now select an individual tile to paint with, click Paint, then draw it on the map, like so:

tile2

 

Quite often you are going to want tiles to appear “over” other tiles.  This can be accomplished using layers.  To add a layer simply click the Add New Layer button:

image

 

The order layers are drawn in is the same as the order they are displayed:

image

You can use the up and down icons to change the layer order.  The layer selected ( the one highlighted ) is the layer that all drawing will occur on.

 

Adding your Tilemap to the Scene

 

Now that you’ve created your map, you can use it like you would any Sprite object.  Simply drag it into your scene:

tile3

 

The positioning of the tilemap is important: the Y value determines what is drawn over or under what else in the scene, just like with sprites.  Sometimes, however, you want to position your sprite in front of a background layer but behind a foreground layer, like so:

tile4

 

This is done using a property called Separation Per Layer in the Tilemap details.

 image

This property controls the Y coordinate (confusingly called Z order in the tooltip) of each layer within the game world.  For example, if you position your tilemap at Y = –10 and set Separation Per Layer to 50, the first layer will be at Y = 40, the second at Y = 90, and so on.  Therefore a sprite at Y = 0 will draw in front of the bottom layer, but behind the top layer.

If you want to see a more detailed example showing collisions in action, be sure to watch the video version of this tutorial.

 

The Video

 

Coming soon


