12. July 2015

 

In today’s “A Closer Look At” guide we will be taking a look at the App Game Kit 2 game engine.  The Closer Look At series is a combination preview, review and getting-started tutorial aimed at giving you a solid overview of what working in a particular game engine is like.  App Game Kit is a cross-platform game engine capable of making games for Windows, Mac, iOS, Android and BlackBerry devices using Mac or Windows development environments.  App Game Kit regularly costs $99, although if you are reading this near the publish date ( 7/12/15 ) it is currently available in the Humble Gamedev Bundle with several other tools.

 

AppGameKit is a cross-platform, mobile-friendly 2D/3D game engine, although the 3D side is still a work in progress.  AGK is programmed primarily using AGK Script, a BASIC dialect with some C++-style features.  If the word BASIC caused you to recoil in terror from your screen, don’t worry, AGK is also available as a C++ library.  If the word C++ just caused the same reaction, hey… there’s always AGK Script… :)  Over the course of this article we will take a quick look at both.  AppGameKit 2 was the result of a successful Kickstarter campaign and is a product of The Game Creators, who previously developed Dark Basic and 3D Gamemaker.

 

There is also an HD video version of this article available here or embedded below.

 

Hello AGK

Let’s jump right in with a pair of code samples, first in AGK Script, then in C++.

 

AGK Script Example

// Project: test 
// Created: 2015-07-10

// Setup the main Window
SetWindowTitle( "My Game Is Awesome!" )
SetWindowSize( 640, 480, 0 )

// set display properties
SetVirtualResolution( 640, 480 )
SetOrientationAllowed( 1, 1, 1, 1 )

// Create a Cube 100x100x100 at the Origin
box = CreateObjectBox(100,100,100)
SetObjectPosition(box,0,0,0)

// Add a light to the scene
CreateLightDirectional(1,-1,-1,-0.5,255,255,255)

// Set the position of the camera and aim it back at the origin
SetCameraPosition(1,0,100,-150)
SetCameraLookAt(1,0,0,0,0)

rotationY# = 0

// Gameloop
do
    // Display the current FPS
    Print( ScreenFPS() )
    // Rotate our object
    SetObjectRotation(box, 0, rotationY#, 0)
    // Display the back buffer
    Sync()
    // Update the rotation
    rotationY# = rotationY# + 1
loop

 

Here is this code example running:

g1

 

C++ Example

// Includes
#include "template.h"

// Namespace
using namespace AGK;

app App;

void app::Begin(void)
{
   agk::SetVirtualResolution (640, 480);
   agk::SetClearColor( 0,0,0 ); 
   agk::SetSyncRate(60,0);
   agk::SetScissor(0,0,0,0);


   // Load a Sprite at index 1
   agk::LoadSprite(1,"logo.png");
   // Scale the sprite to 25% of its size
   agk::SetSpriteScale(1, 0.25, 0.25);

   
}

void app::Loop(void)
{
   agk::Print(agk::ScreenFPS());

   // On gamepad press, spacebar hit, touch or left click, enable
   // physics on our sprite and make that sucker bouncy; gravity
   // does the rest
   if (agk::GetButtonPressed(1) || agk::GetPointerPressed()){
      agk::SetSpritePhysicsOn(1, 2);
      agk::SetSpritePhysicsRestitution(1, 0.8);
   }

   agk::Sync();
}


void app::End (void)
{

}

 

Here is this code sample running:

g2

 

There is a bit more to the C++ example than the code shown here.  AGK is provided as a library for C++ use, and to get started there are a number of templates you can copy and build from.  Each template has a bootstrap class containing the required code to start AGK; for example, the Visual Studio project provides a WinMain implementation for you.  Here are the currently available templates for a Windows install (on macOS, Xcode projects are available for iOS and Mac):

image

 

Of course, you can also create your own application from scratch and simply use AGK as a C++ library if you prefer.

 

Tools Included

 

AGK IDE

If you are working with AGK Script, the majority of your development will be done in AGK IDE.  AGK IDE is a Code::Blocks derived IDE with full text editing, syntax highlighting, code completion and debugging support for AGK Script.

image

 

I experienced no lag when using the IDE.  Most of the features you would expect in a modern IDE are available, including code folding, find/replace, intellisense and code hints:

image

 

Refactoring tools are all but non-existent and unfortunately there is no local help.  (We will discuss help files shortly).

 

Debugging is supported:

image

But the implementation is barebones, limited to breakpoints, step over/out and displaying the call stack.  You can inspect variable values, but you have to enter their names manually in the Variables area; there is no hover inspection, watch list or conditional breakpoint support.  I believe this functionality is relatively new, so hopefully it will be extended with time.

 

If you choose to work with C++, you will not use AGK IDE at all, instead working in your C++ IDE of choice.  As mentioned earlier, several templates are provided.

 

Placement Editor

AGK also ships with the Placement Editor, a fairly simplistic tiled level creator:

image

 

It enables you to place varying sized tiles, perform transforms such as rotation and mirroring, move layers up and down, and supports snapping to a grid for precise layouts.  The UI however leaves something to be desired, with the scroll bars only supporting movement via the arrows and panning controlled using an onscreen controller.  Additionally, you have to exit the application and copy graphics into the media directory to add new tiles.  Unfortunately it’s also lacking some sorely needed functionality, such as defining collision volumes or physics properties.  It does, however, enable quick creation of tiled backgrounds.

AGK Player

One other cool feature of AppGameKit is the AGK Player, available on mobile devices:

ss

 

By pressing Broadcast in the IDE, you can easily see your code running on device without having to bundle/sign/deploy as you normally would.  It is available for download in the various app stores, except on iOS, where you have to build and deploy it yourself using Xcode.  This tool enables rapid on-device testing during development and can be a huge time saver.

 

Samples and Demos

AppGameKit ships with an impressive number of AGK Script code samples:

image

Including a few complete games:

image

And some user provided examples:

image

 

Here is the SpaceShooter example in action:

g3

 

Documentation and Help

 

I mentioned earlier that there is no local documentation available, which is unfortunate.  There also doesn’t appear to be a generated class library reference.  That said, the documentation available for AGK is actually rather solid and has been complete in my experience.  It is available entirely online, so you can check it out yourself.

 

The scripting language itself is quite simple and documented here in reference form, while a broader language introduction and overview is available here.  As mentioned earlier, there are also dozens of included examples to learn from, plus a series of guides covering specific topics such as networking, platform publishing and more.  The closest thing to a class reference is the Commands documentation, where you can toggle between C++ and BASIC code examples:

image

 

AGK also has a very active developer forum available.

 

The Library

 

Of course, we should talk about what functionality is available in AGK.  Overall it’s quite comprehensive, although the 3D side is very actively under development and some functionality is obviously still missing ( such as animated 3D model support ).

 

The following feature list is taken directly from the AGK site.

  • Write once, deploy technology
  • Code in BASIC or native (C++)
  • Device independent
  • Cross Platform IDE
    • Develop on Windows or Mac and deploy to Windows / Mac / iOS / Android / Blackberry 10
    • Broadcast test apps to devices over Wifi
    • Auto Complete
    • Function Lists
    • Code Folding
    • Export to exe/app/apk/ipa
  • 2D Games Engine
    • Tweening
    • Blending modes
    • Spine Support
    • Box 2D Physics
    • Particles
    • Drawing commands
    • Text support - fixed and variable width
  • 3D Engine
    • Primitives
    • Positioning
    • Rotation
    • Shaders
    • Collision
    • Cameras
    • Lights
  • Audio/Visual
    • Video Playback
    • Sound
    • Music
  • Input Agnostic
    • Direct Input Control
    • Touch
    • Keyboard
    • Mouse
    • Accelerometer
    • Joystick/Controllers
  • Sensors
    • Camera Access *
    • GPS *
    • Geo-location *
    • Inclinometer *
    • Light Sensor
  • Mobile
    • Rating an app *
    • In-App Purchasing *
    • Adverts *
      • Chartboost
      • Admob
  • Networking
    • Messages
    • Shared Variables
  • Misc
    • File IO
    • Facebook *
    • Extensive help & tutorials
    • Time & Date
    • Enhanced image control
    • QR Codes
    • HTTP
    • Edit Boxes
    • Zip file control
  • Coming Soon...
    • Debugger
    • 3D Engine Enhancements
    • File IO
    • 3D Bullet Physics
    • Extension System

    * Selected platforms only.

 

Conclusion

 

I certainly haven’t spent enough time with AGK to consider this a full review by any means; the following are simply my initial impressions.  I am personally not a huge fan of the BASIC language, even as a beginner recommendation, but AGK Script is accessible and the included tooling makes AGK an appropriate choice for beginners, especially with the included examples and solid documentation.  I was able to use the library largely by intuition, and the ability to code entirely in C++ will appeal to many developers.  The mapping between C++ and BASIC is very natural, but this also comes at a cost: the C++ side of the equation is very “flat” in structure, using no OOP and a disconcerting number of “magic number” style parameters.  In a simple project, AGK is a solid and approachable choice.  In more complex projects, without writing an organizational layer over top, I could see quickly developing a mess of unmaintainable code.

 

The tools included with AGK are functional but could all use a bit of polish.  The editing experience in AGK Script is fine, but the debugger is fairly young and needs some work, and more refactoring support would be nice.  Integrated and context-sensitive help would also be a huge boon.  For the most part, though, there is enough functionality in the IDE that working in AGK Script wouldn’t feel like a chore.  The level editing tool is nice in that it exists, but the functionality is extremely limited and the UI isn’t the best.  For anything but the most trivial game, I imagine you would find yourself wanting a more mature tool like Tiled; a loader is available, so this is certainly an option.  Also, the editor is only available on Windows machines.

 

From a developer perspective, for a 2D game, AGK provides pretty much all of the functionality you would expect, and some that is a bit unexpected, like cross-platform video playback.  3D is coming along but missing some key functionality.  The coding experience is incredibly consistent; once you’ve figured out how to do one task, you can generally guess how other tasks will be performed.  You can accomplish a great deal in AGK in a very short period of time, but I do question how well the design would scale to larger projects.  As a C++ library, AGK could also be considered a cross-platform competitor to libraries such as SFML or SDL.  AGK does appear to be a good solution for new developers, especially if you like the BASIC programming language or wish to work with an approachable C++ library.  For more experienced developers, AGK is a productive and easy-to-learn library supporting a respectable number of platforms with the option of both high and low level development.  I just don’t know how well this product would scale with project complexity.

 

The Video

Programming


9. July 2015

 

In this part we are going to explore using Particles in the Godot Engine.  Particles are generally sprites, either provided by you or generated programmatically, that are controlled by a unified system.  Think of a rain storm, each drop of rain would represent a particle, but the entire storm itself would be the particle system.  You would not control each drop individually, instead you would simply say “rain here” and the system takes care of the rest.  Particles are often used for special effects such as smoke, fire or sparks.  The Godot game engine makes working with particles quite simple.

 

There is an HD video version of this tutorial available here or embedded below.

 

This particular tutorial isn’t going to go into a great deal of detail over the effects of each setting, as there is already an excellent illustrated guide right here.  Instead I will focus on using particles in a hands-on example.  As I mentioned earlier, particle systems are often used to create fire effects, and that’s exactly what we are going to do here: create a flaming, smoking torch.

 

Creating a particle system is as simple as creating a Particles2D node:

image

Creating one will create a simple system for you:

part1

 

As always, the properties are controlled in the Inspector.  In this case we are creating 32 particles with a lifespan of 2 seconds aimed down and affected by gravity:

image

 

Now let’s suit it to our needs.  First we want to change the direction of our particles to up instead of down.  This is done by setting the direction property, a value in degrees in which particles will be emitted.

image

 

Here is the result:

part2

Next, since this is a torch, we don’t want the particles to be affected by gravity.  Under Params, simply set Gravity Strength to 0:

image

And the result:

part3

Now, white dots aren’t exactly convincing flames… so let’s add a bit of color.  This can be done using Color Phases, the colors a particle will transition through during its lifetime.  For a torch, we will start with a brilliant white, then orange and finally red, like so:

image

Be sure to set the Count to 3.  You can have up to 4 phases if needed.  Now our results look like this:

part4

A bit better.  Now we want to work on the size a bit: let’s start our particles off bigger and have them shrink as they move away from the source of the flame.

image

Resulting in:

part5

 

Finally we slow it down slightly and decrease the spread:

image

 

And voila:

part6

 

A fairly passable torch.  You could play with it a bit: use an image instead of a square particle, change the alpha value of each color phase, or add another overlapping particle system to provide smoke.  Keep in mind though: more particles, more processing.

 

Here is a simple layer of smoke added as a separate particle system and the alpha lowered on the final two color phases:

part7

 

Particles are as much an art to create as a texture or 3D model; play around until you achieve the effect you want.  Be sure to read the link I posted earlier for the effects the various settings have on your particle system.  One other area I never touched on was randomization: in addition to the numerous settings for controlling how particles are created, you can also randomize each of those values so your particles end up less uniform.

 

As mentioned earlier, a particle can also be created from a texture or series of textures.  To set a texture, simply set the system’s texture property:

image

 

In this example I am going to use this spritesheet to create a flock of animated birds:

robincropped

 

Set H and V to correspond to the number of rows and columns in your TextureAtlas:

image

 

I am unsure how to deal with TextureAtlases with empty squares; there doesn’t seem to be a way to set a total count, but I may have overlooked it.  Next you will want to specify how quickly to jump between frames of animation using Anim Speed Scale:

image

I tweaked a few more settings:

image

And my final results are a fairly nice flock of birds:

part8

 

One other feature available is the ParticleAttractor2D, which can be used to attract particles, either to fling them out the other side or absorb them.  Think of it like a black hole that either sucks in or spits out the particles in its radius of influence:

image

part9

Keep in mind that particles all have a lifespan, and once that lifespan has elapsed, each particle will fade away.

 

Particles provide a powerful way of implementing tons of graphically similar effects ( like fire, fog, flocking, etc. ) with a single controlling system.  They are as much art as programming though, so it will take some time playing around to get the effect just right.

 

The Video

 

Programming


23. June 2015

 

With the release of version 4.8 of Unreal Engine, playing audio became a great deal easier for 2D games with the addition of PlaySound2D.  In this section we are going to learn how to import and play audio files in Unreal Engine.  For the application controller I created a simple UI that fires off the playing of audio.  If you are unfamiliar with creating a UI with UMG ( Unreal Motion Graphics ), be sure to read the previous tutorial.

 

As always there is an HD video version of this tutorial available right here.

We are going to be creating a simple UI to fire off audio events:

image

 

We will simply wire each button up to fire off our examples.  I also needed several audio samples; I personally downloaded each one from freesound.org.

 

Importing Audio Files

 

First we need some audio to work with.  So then… what audio formats work with Unreal Engine?  Mp3, mp4, ogg?  Nope… WAV.  You can import your sound files in whatever format you want, so long as it’s WAV.  Don’t worry, this isn’t as big a hindrance as it sounds, as Unreal takes care of the compression and conversion steps for you, so the fact your soundtrack is 10MB in size isn’t as damning as it seems.  Being in an uncompressed source format enables Unreal to offer a lot of power, as you will see shortly.  It also neatly steps around a number of licensing concerns, such as the patent minefield that is mp3.  If your source files aren’t in WAV format, you can easily convert them using the freely available and completely awesome Audacity sound editor.

 

Your WAV files can be in PCM, ADPCM or DVI ADPCM format, although if using defaults you most likely don’t need to worry about this detail.  They should be 16-bit, little-endian (again… generally don’t worry) uncompressed data at any sample rate, although 22kHz and 44.1kHz are recommended, the latter being the sample rate CD-quality audio is encoded at.  Your audio files can be either mono (single channel) or stereo (dual channel), plus you can import up to 8 channels of audio ( generally 8 mono WAV files ) to encode 7.1 surround sound.  That is way beyond the scope of what we will be covering, but more details about 7.1 encoding can be found here.  Importing audio is as simple as using the Import button in the Content Browser, or simple drag and drop.

 

Once imported, you can double click your audio asset to bring up the editor.

image

 

Here you can set a number of properties, including the compression amount, whether to loop, the pitch, and even subtitle information.  There isn’t anything we need to modify right now, though.  I have imported a couple of different mono WAV files, like so:

image

 

And created a simple button to play the audio when pressed:

image

 

Playing Sounds

 

Now let’s wire up the OnClick event to play Thunder.wav, with the following blueprint:

image

 

Yeah… that’s all you need to do: drop in a Play Sound 2D function, pick the Wave file to play, and done.  Before 4.8, the only option was Play Sound at Location, which is virtually identical but requires a position as well.  You can achieve the same effect this way:

image

 

Both Play Sound at Location and Play Sound 2D are fire and forget, in that you have no control over them after the sound has begun to play (other than at a global level, like muting all audio).  Neither moves with the actor, either.

 

What if you want the audio to come from or move with a node in the scene?  This is possible too.   First let’s create a Paper2D character to attach the audio component to.  This process was covered in this tutorial in case you need a refresher.  Don’t forget to create a GameMode as well and configure your newly created controller to be active.

 

Using the Audio Component

 

I created this hierarchy of a character:

image

Notice the Audio component I’ve added?  There are several properties that can be set in the Details panel for the audio component, but the most important is the sound.

image

I went ahead and attached my “music” Sound Wave.  You can set the music file to automatically play using the Activation property:

image

There is also an event available that will fire when your audio file has finished playing. 

image

Unlike PlaySound2D, this sound isn’t fire and forget.   It can also be changed dynamically using the following Blueprint:

image

This blueprint finds the Audio component of our Pawn and then sets its Sound using a call to Play Sound Attached.  As you can see, there are several available properties to set, and you can easily position the audio in the world.

 

As I mentioned earlier, you can also manipulate a running Sound wave when attached as an audio component, like so:

image

 

Paradoxically, there doesn’t actually seem to be a method to get the current volume.  The obvious solution is to keep the volume in a variable of your own and pass it to Adjust Volume Level.

 

Sound Cues

So far we’ve only used directly imported Sound Wave files, but every location we used a Wave, we could have also used a Cue.  As you will see, Cues give you an enormous amount of control over your audio.

 

Start by creating a new Sound Cue object:

image

Name it, then double click it to bring up the Sound Cue editor:

image

A full treatment is well beyond the scope of this tutorial, but you can essentially build complex sounds out of Sound nodes, like this simple graph mixing two sounds together:

image

 

Again, any of the earlier functions such as Play Sound 2D will take a Cue in place of a Wave.

 

We have only scratched the very surface of audio functionality built into Unreal Engine, but this should be more than enough to get you started in 2D.

 

The Video

Programming


19. June 2015

 

Now we move on to a topic people always seem to love: graphics!  In the past few chapters/videos I’ve said over and over “don’t worry, we will cover this later”… well, welcome to later.  We are primarily going to focus on loading and displaying textures using a SpriteBatch.  As you will quickly discover, this is a more complex subject than it sounds.

 

As always, there is an HD video of the content available here

Before we can proceed too far we need a texture to draw.  A texture can generally be thought of as a 2D image stored in memory.  The source image of a texture can be in bmp, dds, dib, hdr, jpg, pfm, png, ppm or tga format.  In the “real world” that generally means bmp, jpg or png, and there is something to be aware of right away: of those three formats, only png has an alpha channel, meaning it supports transparency out of the box.  There are however ways to represent transparency with the other formats, as we will see shortly.  If you’ve got no idea which format to pick, or why, pick png.

 

 

Using the Content Pipeline

If you’ve been reading since the beginning you’ve already seen a bit of the content pipeline, but now we are going to actually see it in action with a real world example.  

Do we have to use the content pipeline for images?


I should make it clear: you can load images that haven’t been converted into xnb format. As of XNA 4, a simpler image loading API was added that allows you to load gif, jpg and png files directly, with the ability to crop, scale and save. The content pipeline does a lot for you though, including massaging your texture into a platform-friendly format, potentially compressing your image, generating mip maps or power-of-two textures, pre-multiplying alpha (explained shortly), optimized loading and more. MonoGame included a number of methods for directly loading content to make up for its lack of a working cross-platform pipeline; with the release of the content pipeline tool, those methods are deprecated. Simply put, for game assets ( i.e., not screenshots, dynamic images, etc. ), you should use the content pipeline.

Create a new project, then in the Contents folder, double click the file Content.mgcb.

image

 

This will open the MonoGame Content Pipeline tool.  Let’s add our texture file: simply select Edit->Add->Existing Item...

image

Navigate to and select a compatible image file.  When prompted, choose the mode that makes the most sense.  I want the original to be untouched, so I am choosing Copy the file to the directory.

image

 

Your content project should now look like:

image

The default import settings for our image are fine, but we need to set the Content build platform.  Select Content in the dialog pictured above, then under Platform select the platform you need to build for.

image

Note the two options for Windows: Windows and WindowsGL.  The Windows platform uses a DirectX backend for rendering, while WindowsGL uses OpenGL.  This does have an effect on how content is processed, so the difference is important. 

Now select Build->Build, saving when prompted:

image

 

You should get a message that your content was built.

image

We are now finished importing, return to your IDE.

Important Platform Specific Information


On Windows, the .mgcb file is all that you need: when the IDE encounters it, it basically treats it as a symlink and instead refers to the content it contains. Currently, when building on macOS using Xamarin, you have to manually copy the generated XNB content into your project and set its build type as Content; the generated files are located in the Output Folder as configured in the Content Pipeline tool. I have been notified that a fix for this is currently underway, so hopefully the Mac and Windows development experiences will be identical soon.
 
Alright, we now have an image to work with, let’s jump into some code.
 
 
 

Loading and displaying a Texture2D

So now we are going to load the texture we just added to the content project, and display it on screen.  Let’s just jump straight into the code.

using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;
using Microsoft.Xna.Framework.Input;

namespace Example1
{
    public class Game1 : Game
    {
        GraphicsDeviceManager graphics;
        SpriteBatch spriteBatch;
        Texture2D texture;

        public Game1()
        {
            graphics = new GraphicsDeviceManager(this);
            Content.RootDirectory = "Content";
        }

        protected override void Initialize()
        {
            base.Initialize();
        }

        protected override void LoadContent()
        {
            spriteBatch = new SpriteBatch(GraphicsDevice);
            texture = this.Content.Load<Texture2D>("logo");
        }

        protected override void UnloadContent()
        {
            //texture.Dispose(); <-- Only directly loaded
            Content.Unload();
        }

        protected override void Update(GameTime gameTime)
        {
            if (GamePad.GetState(PlayerIndex.One).Buttons.Back == ButtonState.
                Pressed || Keyboard.GetState().IsKeyDown(Keys.Escape))
                Exit();
            base.Update(gameTime);
        }

        protected override void Draw(GameTime gameTime)
        {
            GraphicsDevice.Clear(Color.CornflowerBlue);

            spriteBatch.Begin();
            spriteBatch.Draw(texture,Vector2.Zero);
            spriteBatch.End();

            base.Draw(gameTime);
        }
    }
}
 
When we run this code we see:
 
image
 
 
Obviously your image will vary from mine, but our texture is drawn on screen at the position (0,0).
 
There are a few key things to notice here.  First we added a Texture2D to our class, which is essentially the in-memory container for our texture image.  In LoadContent() we then load our image into our texture using the call:
 
texture = this.Content.Load<Texture2D>("logo");
 
You’ll notice we use our Game's Content member here.  This is an instance of Microsoft.Xna.Framework.Content.ContentManager and it is ultimately responsible for loading binary assets from the content pipeline.  The primary method is the generic Load() method, which takes a single parameter: the name of the asset to load, minus the extension.  Notice that last part?  It’s a very common tripping point.  In addition to Texture2D, Load() supports the following types:
  • Effect
  • Model
  • SpriteFont
  • Texture
  • Texture2D
  • TextureCube

It is possible to extend the processor to support additional types, but it is beyond the scope of what we are covering here today.

Next we get to the UnloadContent() method, where we simply call Content.Unload().  The ContentManager “owns” all of the content it loads, so this cleans up the memory for all objects loaded through it.  Notice I left a commented-out example calling Dispose(): it’s important to know that if you load a texture outside of the ContentManager, or create one dynamically, it is your responsibility to dispose of it or you may leak memory.  You may say, hey, this will all get cleaned up on program exit anyway, and honestly that isn’t technically wrong, although cleaning up after yourself is certainly a good habit to get into. 

 

Memory Leaks in C#?


Many developers new to C# think that because it's managed you can't leak memory. This simply isn't true. While memory management is much simpler in C# than in languages like C++, it is still quite possible to have memory leaks. In C#, the easiest way is to not Dispose() of classes that implement IDisposable: an object that implements IDisposable owns an unmanaged resource (such as a texture), and that memory will leak if nobody calls the Dispose() method. Wrapping the allocation in a using statement will result in Dispose() being called at the end of the scope. As a point of trivia, other common C# memory leaks are caused by not removing event listeners and, of course, calling leaky native code (P/Invoke).
 
Now that we have our texture loaded, it’s time to display it on screen.  This is done with the following code:
    spriteBatch.Begin();
    spriteBatch.Draw(texture,Vector2.Zero);
    spriteBatch.End();

I will explain the SpriteBatch in a few moments, so let’s instead focus on the Draw() call, which needs to be made within a Begin()/End() pair.  Let’s just say SpriteBatch.Draw() has A LOT of overloads.  In this example we simply draw the passed-in texture at the passed-in position (0,0).  Next let’s look at a few of the options we have when calling Draw().

Where is 0,0?


Different libraries, frameworks and engines use different coordinate systems. In XNA, like most windowing and UI libraries, the position (0,0) refers to the top left corner of the screen. For sprites, (0,0) refers to the top left corner as well, although this can be changed in code. In many OpenGL-based game engines, (0,0) is located at the bottom left corner of the screen. This distinction becomes especially important when you start working with 3rd party libraries like Box2D, which may use a different coordinate system. Using a top-left origin has advantages when dealing with UI, as your existing OS mouse and pixel coordinates are the same as your game's. The OpenGL approach, however, is more consistent with mathematics, where positive X and Y values refer to the upper right quadrant of a Cartesian plane. Both are valid options and work equally well; converting between them just requires some brain power.

 

Translation and Scaling

spriteBatch.Draw(texture, destinationRectangle: new Rectangle(50, 50, 300, 300));
 
This will draw our sprite at the position (50,50) and scaled to a width of 300 and a height of 300.

image

 

Rotated

spriteBatch.Draw(texture, 
    destinationRectangle: new Rectangle(50, 50, 300, 300),
    rotation:-45f
    );

This will rotate the image about its origin.  Be aware that the rotation parameter is specified in radians, not degrees; for an exact -45 degree rotation you would pass MathHelper.ToRadians(-45f).

image

 

Notice that the rotation was performed relative to the top left corner of the texture.  Quite commonly when rotating and scaling you would rather do it about the sprite's midpoint.  This is where the origin value comes in.

 

Rotated about the Origin

spriteBatch.Draw(texture,
    destinationRectangle: new Rectangle(150 + 50,150 + 50, 300, 300),
    origin:new Vector2(texture.Width/2,texture.Height/2),
    rotation:-45f
    );

Ok, this one may require a bit of explanation.  The origin is now the midpoint of our texture; however, we are going to be translating and scaling relative to our midpoint as well, not the top left.  This means the coordinates passed into our Rectangle need to take this into account if we wish to remain centered.  Also keep in mind that you are resizing the texture as part of the draw call.  This code results in:

image

 

For a bit of clarity, if we hadn't translated (moved) the above, and instead used this code:

spriteBatch.Draw(texture,
    destinationRectangle: new Rectangle(0, 0, 300, 300),
    origin:new Vector2(texture.Width/2,texture.Height/2),
    rotation:-45f
    );
 
We would rotate centered on our sprite, but at the origin of our screen:

image

 

So it's important to consider how the various parameters passed to Draw() interact with each other!
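To make that interaction concrete, here is a small helper that combines the above: it draws a texture scaled and rotated about its midpoint, centered at an arbitrary point. This is my own sketch, not part of the framework:

```csharp
// Draw 'texture' scaled to size x size, rotated by 'radians', and
// centered at 'center'. Because origin is set to the texture midpoint,
// the destination rectangle's X/Y refer to where that midpoint lands,
// so we can pass 'center' directly with no manual offset math.
static void DrawCentered(SpriteBatch batch, Texture2D texture,
                         Vector2 center, int size, float radians)
{
    batch.Draw(texture,
        destinationRectangle: new Rectangle((int)center.X, (int)center.Y, size, size),
        origin: new Vector2(texture.Width / 2f, texture.Height / 2f),
        rotation: radians);
}
```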

 

Tinted

spriteBatch.Begin();
spriteBatch.Draw(texture, 
    destinationRectangle: new Rectangle(50, 50, 300, 300),
    color:Color.Red);
spriteBatch.End();
 
image
 
The Color passed in ( in this case Red ) is multiplied with every pixel in the texture.  Notice how it only affects the texture; the Cornflower Blue background is unaffected.  Multiplying red against blue pixels results in a black-ish colour, while white pixels simply become red.
 
 
 

Flipped

spriteBatch.Draw(texture, 
    destinationRectangle: new Rectangle(50, 50, 300, 300),
    effects:SpriteEffects.FlipHorizontally|SpriteEffects.FlipVertically
    );

That's about it for Draw(); now let's look a bit closer at SpriteBatch.

 

SpriteBatch

 

In order to understand exactly what SpriteBatch does, it’s important to understand how XNA does 2D.  At the end of the day, with modern GPUs, 2D game renderers no longer really exist.  Instead the renderer is actually still working in 3D and faking 2D.  This is done by using an orthographic camera ( explained later, don’t worry ) and drawing to a texture that is plastered on a 2D quad that is parallel to the camera.  SpriteBatch however takes care of this process for you, making it feel like you are still working in 2 dimensions. 

That isn't all, however: SpriteBatch is also a key optimization trick.  Consider a scene consisting of hundreds of small block-shaped sprites, each with its own small 32x32 texture, plus all of the active characters in your scene, each with their own texture being drawn to the screen.  This would result in hundreds or thousands of Direct3D or OpenGL draw calls, which would really hurt performance.  This is where the “batch” part of SpriteBatch comes in.  In its default operating mode ( deferred ), it simply queues up all of the drawing calls; they aren't executed until End() is called.  It then tries to “batch” them all together into as few draw calls as possible, thus rendering as fast as possible.

There are settings attached to a SpriteBatch call, specified in the Begin() that we will see shortly.  These are the same for every single Draw call within the batch.  Additionally you should try to keep every Draw call within the batch using the same texture, or as few different textures as possible.  Each texture switch within a batch incurs a performance penalty.  You can also call multiple Begin()/End() pairs in a single render pass; just be aware that the Begin() process is rather expensive, and this can quickly hurt performance if you do it too many times.  Don't worry though, there are ways to easily organize multiple sprites within a single texture.  If by chance you actually want to perform each Draw call as it occurs, you can instead run the sprite batch in immediate mode, although since XNA 4 ( which MonoGame is based on ) there is little reason to use immediate mode, and the performance penalty is harsh.
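One common way to organize multiple sprites within a single texture is a sprite atlas, drawn via the sourceRectangle parameter. A sketch, assuming a hypothetical atlasTexture laid out as a row of 32x32 cells:

```csharp
// Draw frame 2 from a hypothetical atlas laid out as a horizontal
// strip of 32x32 cells. Every frame shares one texture, so all of
// these draws can be batched into a single GPU draw call.
int frame = 2;
Rectangle source = new Rectangle(frame * 32, 0, 32, 32);

spriteBatch.Begin();
spriteBatch.Draw(atlasTexture,
    position: new Vector2(100, 100),
    sourceRectangle: source);  // only this cell of the atlas is drawn
spriteBatch.End();
```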

One other major function of the SpriteBatch is handling blending, which is how overlapping sprites interact.

 

Sprite Blending

Up until now we’ve used a single sprite with no transparency, so that’s been relatively simple.  Let’s instead look at an example that isn’t entirely opaque.

Let's go ahead and add a transparent sprite to our content project.  I am going to use this one:

transparentSprite

… I’m sorry, I simply couldn’t resist the pun.  The key part is that your sprite supports transparency, so if you draw it over itself you should see:

transparentSpriteOverlay

 

Now let’s change our code to draw two sprites in XNA.

using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;
using Microsoft.Xna.Framework.Input;

namespace Example2
{
    public class Game1 : Game
    {
        GraphicsDeviceManager graphics;
        SpriteBatch spriteBatch;
        Texture2D texture;

        public Game1()
        {
            graphics = new GraphicsDeviceManager(this);
            graphics.PreferredBackBufferWidth = 400;
            graphics.PreferredBackBufferHeight = 400;
            Content.RootDirectory = "Content";
        }

        protected override void LoadContent()
        {
            spriteBatch = new SpriteBatch(GraphicsDevice);
            texture = this.Content.Load<Texture2D>("transparentSprite");
        }

        protected override void Update(GameTime gameTime)
        {
            if (GamePad.GetState(PlayerIndex.One).Buttons.Back == ButtonState.Pressed ||
                Keyboard.GetState().IsKeyDown(Keys.Escape))
                Exit();
            base.Update(gameTime);
        }

        protected override void Draw(GameTime gameTime)
        {
            GraphicsDevice.Clear(Color.CornflowerBlue);

            spriteBatch.Begin();
            spriteBatch.Draw(texture, Vector2.Zero);
            spriteBatch.Draw(texture, new Vector2(100,0));
            spriteBatch.End();

            base.Draw(gameTime);
        }
    }
}
 
... and run:
image
Pretty cool.

 

This example worked right out of the box for a couple of reasons.  First, our sprites were transparent and identical, so draw order didn't matter.  Second, when we ran the content pipeline, the default importer ( and the default sprite batch blend mode ) is transparency friendly.

image

This setting creates a special transparency channel for your image upon import, which is used by the SpriteBatch when calculating transparency between images.

 

Let’s look at a less trivial example, with a transparent and opaque image instead.

using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;
using Microsoft.Xna.Framework.Input;

namespace Example2
{
    public class Game1 : Game
    {
        GraphicsDeviceManager graphics;
        SpriteBatch spriteBatch;
        Texture2D texture;
        Texture2D texture2;

        public Game1()
        {
            graphics = new GraphicsDeviceManager(this);
            graphics.PreferredBackBufferWidth = 400;
            graphics.PreferredBackBufferHeight = 400;
            Content.RootDirectory = "Content";
        }

        protected override void LoadContent()
        {
            spriteBatch = new SpriteBatch(GraphicsDevice);
            texture = this.Content.Load<Texture2D>("logo");
            texture2 = this.Content.Load<Texture2D>("transparentSprite");
        }

        protected override void Update(GameTime gameTime)
        {
            if (GamePad.GetState(PlayerIndex.One).Buttons.Back == ButtonState.Pressed ||
                Keyboard.GetState().IsKeyDown(Keys.Escape))
                Exit();
            base.Update(gameTime);
        }

        protected override void Draw(GameTime gameTime)
        {
            GraphicsDevice.Clear(Color.CornflowerBlue);

            spriteBatch.Begin();
            spriteBatch.Draw(texture, Vector2.Zero);
            spriteBatch.Draw(texture2, Vector2.Zero);
            spriteBatch.End();

            base.Draw(gameTime);
        }
    }
}
 
When run:

image

So far, so good.  Now let’s mix up the draw order a bit…

spriteBatch.Begin();
spriteBatch.Draw(texture2, Vector2.Zero);
spriteBatch.Draw(texture, Vector2.Zero);
spriteBatch.End();

… and run:

image

Oh…

As you can see, by default the order in which we make Draw calls is the order in which the sprites are drawn.  That is, the second Draw() call draws over the results of the first Draw() call, and so on.

 

There is a way to explicitly set the drawing order:

spriteBatch.Begin(sortMode: SpriteSortMode.FrontToBack);
spriteBatch.Draw(texture2, Vector2.Zero, layerDepth:1.0f);
spriteBatch.Draw(texture, Vector2.Zero, layerDepth:0.0f);
spriteBatch.End();

 

Here you are setting the SpriteBatch sort order to front to back, then manually setting the draw layer in each Draw call.  As you may have guessed, there is also a BackToFront setting.  SpriteSortMode is also what determines whether drawing is immediate ( SpriteSortMode.Immediate ) or deferred ( SpriteSortMode.Deferred ).

 

Blend States

 

We mentioned earlier that textures imported using the Content Pipeline by default have a special pre-calculated transparency channel created.  This corresponds to SpriteBatch's default BlendState, AlphaBlend.  This uses the magic value created by the pipeline to determine how overlapping transparent sprites are rendered.  If you don't have a really good reason otherwise, and are using the Content Pipeline to import your textures, you should stick with the default.  I should point out that this behavior only became the default in XNA 4, so older tutorials may behave much differently.

 

The old default was interpolative blending, which used the RGBA values of the texture to determine transparency.  This could lead to some strange rendering artifacts ( discussed here: https://en.wikipedia.org/wiki/Alpha_compositing ).  The advantage is that all you need to blend images is an alpha channel; there was no requirement to create a special pre-multiplied channel.  This means you didn't have to run these images through the content pipeline.  If you wish to do things the “old” way, when importing your assets ( if not simply loaded directly from file ) set PremultiplyAlpha to false in the Texture Processor settings of the Content Pipeline.  Then in your SpriteBatch, do the following:

spriteBatch.Begin(blendState:BlendState.NonPremultiplied);
 
There are additional BlendState options, including Additive ( colors are simply added together ) and Opaque ( subsequent draw calls simply overwrite earlier ones ).  You can have a great deal of control over the BlendState, but most projects simply will not require it.  One other thing I ignored is chroma keying.  This is another option for supporting transparency: you dedicate a single color to be transparent, then specify that color in the Content Pipeline.  Essentially you are forming a 1-bit alpha channel and “green screening” like in the movies.  Obviously you cannot then use that color in your image, however.  In exchange for ugly source sprites and extra labor, you save on file size, as you don't need to encode the alpha channel.
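Switching between these modes is just a matter of what you pass to Begin(). A quick sketch of the options mentioned above ( each Begin()/End() pair uses one blend state for every Draw within it ):

```csharp
// Additive blending: colors are summed, commonly used for glows and fire.
spriteBatch.Begin(blendState: BlendState.Additive);
spriteBatch.Draw(texture, Vector2.Zero);
spriteBatch.End();

// Opaque blending: later draws simply overwrite earlier ones,
// ignoring any alpha channel entirely.
spriteBatch.Begin(blendState: BlendState.Opaque);
spriteBatch.Draw(texture, Vector2.Zero);
spriteBatch.End();
```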
 
 
There is some additional functionality built into SpriteBatch, including texture sampling, stencil buffers, matrix transforms and even special effects.  These are well beyond the basics though, so we will have to cover them at a later stage.
 
 

The Video

 



16. June 2015

 

Now we are going to talk about two important concepts in AI development for 2D games: path following and navigation meshes.  Path following is exactly what you think it is: you create paths and follow them.  This is useful for creating predefined paths in your game.  When you are looking for a bit more dynamic pathfinding for your characters, the Navigation Mesh ( or NavMesh ) comes to the rescue.  A NavMesh is simply a polygon mesh that defines where a character can and cannot travel.

 

As always there is an HD video of this tutorial available here.

Let's start with simple path following.  For both of these examples we are going to want a simple level to navigate.  I am going to create one using a single sprite background that may look somewhat familiar…

image

 

So, we have a game canvas to work with, let’s get a character sprite to follow a predefined path.

 

Path2D and PathFollow2D

 

First we need to start off by creating and defining a path to follow.  Create a new Path2D node:

image

 

This will add additional editing tools to the 2D view:

image

 

Click the Add Point button and start drawing your path, like so:

image

 

Now add a PathFollow2D node, and a Sprite attached to that node, like so:

image

 

There are the following properties on the PathFollow2D node:

image

 

You may find that your sprite starts out rotated for some reason.  The primary setting of concern, though, is the Offset property.  This is the distance along the path to travel; we will see it in action shortly.  The Loop value is also important, as it causes the path to go back to offset 0 once it reaches the end and start the travel all over again.  Finally, I turned Rotate off, as I don't want the sprite to rotate as it follows the path.

 

Now, create and add a script to your player sprite, like so:

extends Sprite


func _ready():
   set_fixed_process(true)

func _fixed_process(delta):
   get_parent().set_offset(get_parent().get_offset() + (50*delta))

 

This code simply gets the sprite's parent ( the PathFollow2D node ) and increments its offset by 50 pixels per second.  You can see the results below:

PathFollow

 

You could of course have controlled the offset value using keyframes and an AnimationPlayer, as described in the previous chapter.

 

So that's how you can define movement along a predefined path… what about something a bit more dynamic?

 

Navigation2D and NavigationPolygon

 

Now let’s create a slightly different node hierarchy.  This time we need to create a Navigation2D Node, either as the root, or attached to the root of the scene.  I just made it the root node.  I also loaded in our level background sprite.  FYI, the sprite doesn’t have to be parented to the Navigation2D node.

image

 

Now we need to add a NavMesh to the scene.  This is done by creating a NavigationPolygonInstance as a child node of Navigation2D:

image

 

This changes the menu available in the 2D view again, and now we can start drawing the NavMesh.  Start by outlining the entire level.  Keep in mind, the nav mesh defines where the character can walk, not where they can't, so make the outer bounds of your initial polygon match the furthest extent the character can walk.  To start, click the Pen icon.  On first click you will be presented with this dialog:

image

 

Click create.  Then define the boundary polygon, like so:

image

 

Now, using the Pen button again, start defining polygons around the areas the character can't travel.  This will cut those spaces out of the navigation polygon.  After some time, I ended up with something like this:

image

 

So we now have a NavMesh, let’s put it to use.  Godot is now able to calculate the most efficient path between two locations.

For debugging purposes I quickly imported a TTF font; you can read about this process in Chapter 5 on UI, Widgets and Themes.  Next, attach a script to your Navigation2D node and enter the following code:

extends Navigation2D
var path = []
var font = null
var drawTouch = false
var touchPos = Vector2(0,0)
var closestPos = Vector2(0,0)

func _ready():
   font = load("res://arial.fnt")
   set_process_input(true)

func _draw():
   if(path.size()):
      for i in range(path.size()):
         draw_string(font,Vector2(path[i].x,path[i].y - 20),str(i+1))
         draw_circle(path[i],10,Color(1,1,1))
      
      if(drawTouch):
         draw_circle(touchPos,10,Color(0,1,0))  
         draw_circle(closestPos,10,Color(0,1,0))
   

func _input(event):
   if(event.type == InputEvent.MOUSE_BUTTON):
      if(event.button_index == 1):
         if(path.size()):
            touchPos = Vector2(event.x,event.y)
            drawTouch = true
            closestPos = get_closest_point(touchPos)
            print("Drawing touch")
            update()
            
      if(event.button_index == 2):
         path = get_simple_path(get_node("Sprite").get_pos(), Vector2(event.x, event.y))
         update()

 

This code has two tasks.  First, when the user right-clicks, it calculates a path between the character sprite and the clicked location.  This is done using the critical function get_simple_path(), which returns a Vector2Array of points between the two locations.  Once you've calculated at least one path ( the path array needs to be populated ), left clicking outside of the navmesh will then show two circles: one where you clicked, the other representing the closest navigable location, as returned by get_closest_point().

 

Here is our code in action:

PacNav

As you right click, a new path is established, drawn as white dots.  Then left clicking marks the location of the click and the nearest walkable location in the nav polygon.  You may notice the first left click resulted in drawing a location at the left of the screen.  This is because my navmesh wasn't watertight; let's look:

image

 

Although minuscule in size, this small sliver of polygons is a valid path to the computer.  When setting up your nav meshes, be sure you don't leave gaps like this!

 

There are a couple of things you might notice.  The path returned is the minimum direct navigable line between two points.  It does not, however, take into account the size of the item you want to move.  This is logic that you need to provide yourself.  In the example of something like PacMan, you are probably better off using a cell based navigation system built on an algorithm like A*.  I really wish get_simple_path() allowed you to specify the radius of your sprite's bounding circle to determine if the path is actually large enough to travel.  As it stands now, you are going to have to completely fill in areas that are too small for your sprite.  This renders Navigation2D of little use to nodes of varying sizes.

 

Regardless of the limitations, Navigation2D and Path2D provide a great template for 2D based AI development.

 

The Video
