
22. June 2015

 

Export to the web has been a popular bullet point on game engine feature lists as of late.  Both Unreal Engine and Unity recently added native HTML5 export (Unity has had a browser plugin option for years, but plugins have gone the way of the Dodo).  LibGDX has offered an HTML5 target for ages, and Haxe-based game engines can all cross compile to HTML5 or Flash.  GameMaker, Construct, Stencyl and more have HTML5 exporters.  Of course, there are a number of HTML5-native engines (I suppose Construct should be on that list, actually…), but they aren't really what this conversation is about.

 

At the end of the day though… is anyone actually doing it?  Is there a “shipped” title from any of these engines that people are playing in any quantity?  It is my understanding that the vast majority of commercial web and Facebook games are still Flash based, with HTML5 representing perhaps 10-20% of titles.  More importantly, most of these titles are relatively simple, with nothing harnessing even a fraction of the power of an Unreal or Unity class engine.

 

It leads me to question, is HTML5 a feature everybody wants but nobody uses?

 

What inspired this thought was this interesting article on Godot’s efforts to target the web.  I have been actively monitoring the game development world during the entire timespan Juan described, and I witnessed all of these “next big thing” technologies come and go.  Summarized from the article, they were:

  • Native Plugins
  • Google Native Client
  • Flash
  • ASM.js
  • WebAssembly

 

This is by no means a comprehensive list (LibGDX compiles from Java to JavaScript using Google’s GWT, for example), and some of these technologies were certainly successful for a time, such as Flash.  Of course, games written and targeted for HTML5 from the start also exist.  It’s the whole “HTML5 as an additional target” approach that I am beginning to think is just a gimmick.  The funny part is, I love the idea too… I evaluate a lot of game engines and I always see “HTML5 export” and think “ooooh, that’s good”.

 

Games in the browser certainly seemed to have a brilliant future at one point.  The Unity web plugin really did bring Unity to the web.  I used a couple of Unity powered 3D tools, such as Mixamo, and the experience was very near to native (they have since ported to HTML5, as the Unity plugin was end-of-lifed).  Google’s Native Client (NaCl) had some promise, with shipped titles such as AirMech leading the way, but a single browser solution was never going to fly.  Ultimately Flash, especially with its (at the time) promising 3D API, had the most potential, but a combination of Apple’s malevolence and Adobe’s incompetence brought it to an inglorious end.

 

Perhaps it’s just ignorance on my end.  Can anyone out there point me at an HTML5 game (not a tech demo, an actual game) created in either Unity or Unreal Engine?  I realize both exporters are fairly new, so I would also settle for examples that are under development but show promise.  Keep in mind, I’m not talking about HTML5 game engines here; there certainly is a future in HTML5 games, and of course they can be wrapped and deployed to a variety of platforms.  No, it’s “HTML5 as an additional target platform” support that seems to be pure gimmick at this point.


19. June 2015

 

Now we move on to a topic that people always seem to love: graphics!  In the past few chapters/videos I’ve said over and over “don’t worry, we will cover this later”… well, welcome to later.  We are primarily going to focus on loading and displaying textures using a SpriteBatch.  As you will quickly discover, this is a more complex subject than it sounds.

 

As always, there is an HD video of the content available here

Before we can proceed too far we need a texture to draw.  A texture can generally be thought of as a 2D image stored in memory.  The source image of a texture can be in bmp, dds, dib, hdr, jpg, pfm, png, ppm or tga format.  In the “real world” that generally means bmp, jpg or png, and there is something to be aware of right away.  Of those three formats, only png has an alpha channel, meaning it supports transparency out of the box.  There are however ways to represent transparency in the other formats, as we will see shortly.  If you’ve got no idea which format to pick, or why, pick png.

 

 

Using the Content Pipeline

If you’ve been reading since the beginning you’ve already seen a bit of the content pipeline, but now we are going to actually see it in action with a real world example.  

Do we have to use the content pipeline for images?


I should make it clear, you can load images that haven’t been converted into xnb format. As of XNA 4, a simpler image loading API was added that allows you to load gif, jpg and png files directly, with the ability to crop, scale and save. The content pipeline does a lot for you though, including massaging your texture into a platform friendly format, potentially compressing your image, generating mip maps or power-of-two textures, pre-multiplying alpha (explained shortly), optimized loading and more. MonoGame included a number of methods for directly loading content to make up for its lack of a working cross platform pipeline. With the release of the content pipeline tool, these methods are deprecated. Simply put, for game assets (that is, not screenshots, dynamic images, etc.), you should use the content pipeline.
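
For completeness, here is a minimal sketch of the direct loading approach, assuming a “logo.png” sitting beside the executable.  Note that FromStream does none of the processing the pipeline does (no pre-multiplied alpha, no compression):

// Load a png directly, bypassing the content pipeline entirely.
// The resulting texture is NOT owned by the ContentManager, so we
// are responsible for calling Dispose() on it ourselves.
Texture2D rawTexture;
using (var stream = System.IO.File.OpenRead("logo.png"))
{
    rawTexture = Texture2D.FromStream(GraphicsDevice, stream);
}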

Create a new project, then in the Contents folder, double click the file Content.mgcb.

image

 

This will open the MonoGame Content Pipeline tool.  Let’s add our texture file; simply select Edit->Add->Existing Item...

image

Navigate to and select a compatible image file.  When prompted, choose the mode that makes the most sense.  I want the original to be untouched, so I am choosing Copy the file to the directory.

image

 

Your content project should now look like:

image

The default import settings for our image are fine, but we need to set the Content build platform.  Select Content in the dialog pictured above, then under Platform select the platform you need to build for.

image

Note the two options for Windows: Windows and WindowsGL.  The Windows platform uses a DirectX backend for rendering, while WindowsGL uses OpenGL.  This does have an effect on how content is processed, so the difference is important.

Now select Build->Build, saving when prompted:

image

 

You should get a message that your content was built.

image

We are now finished importing, return to your IDE.

Important Platform Specific Information


On Windows, the .mgcb file is all that you need. When the IDE encounters it, it basically treats it as a symlink and instead refers to the contents it contains. Currently, when building on MacOS using Xamarin, you have to manually copy the generated XNB contents into your project and set their build type as Content. The generated files are located in the Output Folder as configured in the Content Pipeline tool. I have been notified that a fix for this is currently underway, so hopefully the Mac and Windows development experience will be identical soon.
 
Alright, we now have an image to work with, let’s jump into some code.
 
 
 

Loading and displaying a Texture2D

So now we are going to load the texture we just added to the content project, and display it on screen.  Let’s just jump straight into the code.

using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;
using Microsoft.Xna.Framework.Input;

namespace Example1
{
    public class Game1 : Game
    {
        GraphicsDeviceManager graphics;
        SpriteBatch spriteBatch;
        Texture2D texture;

        public Game1()
        {
            graphics = new GraphicsDeviceManager(this);
            Content.RootDirectory = "Content";
        }

        protected override void Initialize()
        {
            base.Initialize();
        }

        protected override void LoadContent()
        {
            spriteBatch = new SpriteBatch(GraphicsDevice);
            texture = this.Content.Load<Texture2D>("logo");
        }

        protected override void UnloadContent()
        {
            //texture.Dispose(); <-- only needed for directly loaded textures
            Content.Unload();
        }

        protected override void Update(GameTime gameTime)
        {
            if (GamePad.GetState(PlayerIndex.One).Buttons.Back == ButtonState.
                Pressed || Keyboard.GetState().IsKeyDown(Keys.Escape))
                Exit();
            base.Update(gameTime);
        }

        protected override void Draw(GameTime gameTime)
        {
            GraphicsDevice.Clear(Color.CornflowerBlue);

            spriteBatch.Begin();
            spriteBatch.Draw(texture,Vector2.Zero);
            spriteBatch.End();

            base.Draw(gameTime);
        }
    }
}
 
When we run this code we see:
 
image
 
 
Obviously your image will vary from mine, but our texture is drawn on screen at the position (0,0).
 
There are a few key things to notice here.  First we added a Texture2D to our class, which is essentially the in-memory container for our texture image.  In LoadContent() we then load our image into our texture using the call:
 
texture = this.Content.Load<Texture2D>("logo");
 
You’ll notice we use our Game's Content member here.  This is an instance of Microsoft.Xna.Framework.Content.ContentManager and it is ultimately responsible for loading binary assets from the content pipeline.  The primary method is the generic Load() method, which takes a single parameter: the name of the asset to load, minus the extension.  Notice the emphasis on minus the extension?  That’s because this is a very common tripping point.  In addition to Texture2D, Load() supports the following types:
  • Effect
  • Model
  • SpriteFont
  • Texture
  • Texture2D
  • TextureCube

It is possible to extend the processor to support additional types, but it is beyond the scope of what we are covering here today.
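
Loading any of these types follows the same pattern; for example, a hypothetical SpriteFont asset named “gameFont” added to the same content project:

// Same call, different type parameter; the asset name again has no extension.
SpriteFont font = Content.Load<SpriteFont>("gameFont");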

Next we get to the UnloadContent() method, where we simply call Content.Unload().  The ContentManager “owns” all of the content it loads, so this cleans up the memory for all of the objects loaded through the ContentManager.  Notice I left a commented out example calling Dispose().  It’s important to know that if you load a texture outside of the ContentManager, or create one dynamically, it is your responsibility to dispose of it or you may leak memory.  You may say, hey, this will all get cleaned up on program exit anyway.  Honestly, that isn’t technically wrong, although cleaning up after yourself is certainly a good habit to get into.

 

Memory Leaks in C#?


Many developers new to C# think that because it's managed you can't leak memory. This simply isn't true. While memory management is much simpler in C# than in languages like C++, it is still quite possible to have memory leaks. In C# the easiest way is to not Dispose() of classes that implement IDisposable. An object that implements IDisposable owns an unmanaged resource (such as a Texture), and that memory will be leaked if nobody calls the Dispose() method. Wrapping the allocation in a using statement will result in Dispose() being called at the end of the scope. As a point of trivia, other common C# memory leaks are caused by not removing event listeners and, of course, calling leaky native code (P/Invoke).
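
As a quick illustration of the using pattern mentioned above (the file name here is made up for the example):

// A directly loaded texture implements IDisposable, so wrapping it in
// using guarantees Dispose() runs even if an exception is thrown.
using (var stream = System.IO.File.OpenRead("notInPipeline.png"))
using (var tempTexture = Texture2D.FromStream(GraphicsDevice, stream))
{
    // ... use tempTexture here ...
}   // both the stream and the texture are disposed at the end of this scope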
 
Now that we have our texture loaded, it’s time to display it on screen.  This is done with the following code:
    spriteBatch.Begin();
    spriteBatch.Draw(texture,Vector2.Zero);
    spriteBatch.End();

I will explain the SpriteBatch in a few moments, so let’s instead focus on the Draw() call.  This needs to be called within a Begin()/End() pair.  Let’s just say SpriteBatch.Draw() has A LOT of overloads, which we will look at now.  In this example we simply draw the passed-in texture at the passed-in position (0,0).  Next let’s look at a few of the options we have when calling Draw().

Where is 0,0?


Different libraries, frameworks and engines have different coordinate systems. In XNA, like most windowing or UI libraries, the position (0,0) refers to the top left corner of the screen. For sprites, (0,0) refers to the top left corner as well, although this can be changed in code. In many OpenGL based game engines, (0,0) is located at the bottom left corner of the screen. This distinction becomes especially important when you start working with 3rd party libraries like Box2D, which may use a different coordinate system. Using a top left origin has advantages when dealing with UI, as your existing OS mouse and pixel coordinates are the same as your game's. The OpenGL approach, however, is more consistent with mathematics, where positive X and Y coordinate values refer to the top right quadrant of a Cartesian plane. Both are valid options and work equally well; they just require some brain power to convert between.
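
If you do find yourself mixing the two conventions (say, pulling positions out of a physics library that uses a bottom-left origin), the conversion is a single flip of the Y axis. A sketch, with made-up names:

// Convert a bottom-left-origin position into XNA's top-left-origin system.
Vector2 ToTopLeft(Vector2 bottomLeftPos, int screenHeight)
{
    return new Vector2(bottomLeftPos.X, screenHeight - bottomLeftPos.Y);
}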

 

Translation and Scaling

spriteBatch.Draw(texture, destinationRectangle: new Rectangle(50, 50, 300, 300));
 
This will draw our sprite at the position (50,50) and scaled to a width of 300 and a height of 300.

image

 

Rotated

spriteBatch.Draw(texture, 
    destinationRectangle: new Rectangle(50, 50, 300, 300),
    rotation:-45f
    );

This will rotate the image about its origin.  One caution: the rotation parameter is specified in radians, not degrees, so –45f is not the –45 degree turn it may appear to be; for that, convert with MathHelper.ToRadians(), as shown in the sketch after the screenshot.

image
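
Since rotation is in radians, the call you most likely want for a true –45 degree rotation looks like this:

spriteBatch.Draw(texture,
    destinationRectangle: new Rectangle(50, 50, 300, 300),
    rotation: MathHelper.ToRadians(-45f)
    );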

 

Notice that the rotation was performed relative to the top left corner of the texture.  Quite commonly when rotating and scaling you would rather do it about the sprite’s midpoint.  This is where the origin value comes in.

 

Rotated about the Origin

spriteBatch.Draw(texture,
    destinationRectangle: new Rectangle(150 + 50,150 + 50, 300, 300),
    origin:new Vector2(texture.Width/2,texture.Height/2),
    rotation:-45f
    );

Ok, this one may require a bit of explanation.  The origin is now the midpoint of our texture; however, we are now translating and scaling relative to that midpoint as well, not the top left.  This means the coordinates passed into our Rectangle need to take this into account if we wish to remain centered.  Also keep in mind that you are resizing the texture as part of the draw call.  This code results in:

image

 

For a bit of clarity, if we hadn’t translated (moved) the above and instead used this code:

spriteBatch.Draw(texture,
    destinationRectangle: new Rectangle(0, 0, 300, 300),
    origin:new Vector2(texture.Width/2,texture.Height/2),
    rotation:-45f
    );
 
We would rotate centered on our sprite, but at the origin of our screen:

image

 

So it’s important to consider how the various parameters passed to draw interact with each other!

 

Tinted

spriteBatch.Begin();
spriteBatch.Draw(texture, 
    destinationRectangle: new Rectangle(50, 50, 300, 300),
    color:Color.Red);
spriteBatch.End();
 
image
 
The Color passed in (in this case Red) is multiplied with every pixel in the texture. Notice how it only affects the texture; the Cornflower Blue background is unaffected.  Multiplying red into the blue pixels resulted in a black-ish colour (blue has no red component to keep), while white pixels simply became red.
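
To make the arithmetic concrete, the tint is conceptually a per-channel multiply, with channels normalized to the 0..1 range. This sketch is my illustration of the idea, not the actual shader code:

// Per-channel multiply:
//   white (1,1,1) * red (1,0,0) = (1,0,0) -> stays red
//   blue  (0,0,1) * red (1,0,0) = (0,0,0) -> black
Vector4 ApplyTint(Vector4 texel, Vector4 tint)
{
    return texel * tint;
}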
 
 
 

Flipped

spriteBatch.Draw(texture, 
    destinationRectangle: new Rectangle(50, 50, 300, 300),
    effects:SpriteEffects.FlipHorizontally|SpriteEffects.FlipVertically
    );

That's about it for Draw(); now let’s look a bit closer at SpriteBatch.

 

SpriteBatch

 

In order to understand exactly what SpriteBatch does, it’s important to understand how XNA does 2D.  At the end of the day, with modern GPUs, dedicated 2D game renderers no longer really exist.  Instead the renderer is actually still working in 3D and faking 2D.  This is done by using an orthographic camera (explained later, don’t worry) and drawing each texture onto a 2D quad that is parallel to the camera.  SpriteBatch takes care of this process for you, making it feel like you are still working in two dimensions.
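
For the curious, the projection involved is conceptually similar to the following. This is a sketch of the idea only; SpriteBatch builds its own matrix internally:

// An orthographic projection mapping pixel coordinates to clip space,
// with (0,0) at the top left of the screen.
Matrix projection = Matrix.CreateOrthographicOffCenter(
    0, GraphicsDevice.Viewport.Width,    // left, right
    GraphicsDevice.Viewport.Height, 0,   // bottom, top
    0, 1);                               // near and far planes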

That isn’t all, however; SpriteBatch is also a key optimization trick.  Consider if your scene consisted of hundreds of small block shaped sprites, each with its own small 32x32 texture, plus all of the active characters in your scene, each with their own texture being drawn to the screen.  This would result in hundreds or thousands of Direct3D or OpenGL draw calls, which would really hurt performance.  This is where the “Batch” part of sprite batch comes in.  In its default operating mode (deferred), it simply queues up all of the drawing calls; they aren’t executed until End() is called.  It then tries to “batch” them all together into a single draw call, thus rendering as fast as possible.

There are settings attached to a SpriteBatch, specified in the Begin() call, that we will see shortly.  These are the same for every single Draw call within the batch.  Additionally you should try to keep every Draw call within the batch on the same texture, or within as few different textures as possible; each different texture within a batch incurs a performance penalty.  You can also use multiple Begin()/End() pairs in a single render pass, just be aware that the Begin() process is rather expensive and can quickly hurt performance if you do it too many times.  Don’t worry though, there are ways to easily organize multiple sprites within a single texture, as the sketch below shows.  If by chance you actually want to perform each Draw call as it occurs, you can instead run the sprite batch in immediate mode, although since XNA 4 (which MonoGame is based on) there is little reason to use immediate mode, and the performance penalty is harsh.
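
For example, the sourceRectangle parameter lets every sprite in a batch come from one shared texture. The sprite sheet and cell coordinates here are hypothetical:

// Draw the 32x32 cell at the top left of a shared sprite sheet.
// Because every Draw uses the same texture, the whole batch can
// collapse into a single GPU draw call.
spriteBatch.Begin();
spriteBatch.Draw(spriteSheet,
    position: new Vector2(100, 100),
    sourceRectangle: new Rectangle(0, 0, 32, 32));
spriteBatch.End();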

One other major function of the SpriteBatch is handling blending, which is how overlapping sprites interact.

 

Sprite Blending

Up until now we’ve used a single sprite with no transparency, so that’s been relatively simple.  Let’s instead look at an example that isn’t entirely opaque.

Let’s go ahead and add a transparent sprite to our content project.  Myself, I am going to use this one:

transparentSprite

… I’m sorry, I simply couldn’t resist the pun.  The key part is that your sprite supports transparency, so if you draw it over itself you should see:

transparentSpriteOverlay

 

Now let’s change our code to draw two sprites in XNA.

using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;
using Microsoft.Xna.Framework.Input;

namespace Example2
{
    public class Game1 : Game
    {
        GraphicsDeviceManager graphics;
        SpriteBatch spriteBatch;
        Texture2D texture;

        public Game1()
        {
            graphics = new GraphicsDeviceManager(this);
            graphics.PreferredBackBufferWidth = 400;
            graphics.PreferredBackBufferHeight = 400;
            Content.RootDirectory = "Content";
        }

        protected override void LoadContent()
        {
            spriteBatch = new SpriteBatch(GraphicsDevice);
            texture = this.Content.Load<Texture2D>("transparentSprite");
        }

        protected override void Update(GameTime gameTime)
        {
            if (GamePad.GetState(PlayerIndex.One).Buttons.Back == ButtonState.
                Pressed || Keyboard.GetState().IsKeyDown(Keys.Escape))
                Exit();
            base.Update(gameTime);
        }

        protected override void Draw(GameTime gameTime)
        {
            GraphicsDevice.Clear(Color.CornflowerBlue);

            spriteBatch.Begin();
            spriteBatch.Draw(texture, Vector2.Zero);
            spriteBatch.Draw(texture, new Vector2(100,0));
            spriteBatch.End();

            base.Draw(gameTime);
        }
    }
}
 
... and run:
image
Pretty cool.

 

This example worked right out of the box for a couple of reasons.  First, our two sprites were identical and transparent, so draw order didn’t matter.  Also, when we ran the content pipeline, the default importer (and the default sprite batch blend mode) is transparency friendly.

image

This setting creates a special transparency channel for your image upon import, which is used by the SpriteBatch when calculating transparency between images.

 

Let’s look at a less trivial example, with a transparent and opaque image instead.

using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;
using Microsoft.Xna.Framework.Input;

namespace Example2
{
    public class Game1 : Game
    {
        GraphicsDeviceManager graphics;
        SpriteBatch spriteBatch;
        Texture2D texture;
        Texture2D texture2;

        public Game1()
        {
            graphics = new GraphicsDeviceManager(this);
            graphics.PreferredBackBufferWidth = 400;
            graphics.PreferredBackBufferHeight = 400;
            Content.RootDirectory = "Content";
        }

        protected override void LoadContent()
        {
            spriteBatch = new SpriteBatch(GraphicsDevice);
            texture = this.Content.Load<Texture2D>("logo");
            texture2 = this.Content.Load<Texture2D>("transparentSprite");
        }

        protected override void Update(GameTime gameTime)
        {
            if (GamePad.GetState(PlayerIndex.One).Buttons.Back == ButtonState.
                Pressed || Keyboard.GetState().IsKeyDown(Keys.Escape))
                Exit();
            base.Update(gameTime);
        }

        protected override void Draw(GameTime gameTime)
        {
            GraphicsDevice.Clear(Color.CornflowerBlue);

            spriteBatch.Begin();
            spriteBatch.Draw(texture, Vector2.Zero);
            spriteBatch.Draw(texture2, Vector2.Zero);
            spriteBatch.End();

            base.Draw(gameTime);
        }
    }
}
 
When run:

image

So far, so good.  Now let’s mix up the draw order a bit…

spriteBatch.Begin();
spriteBatch.Draw(texture2, Vector2.Zero);
spriteBatch.Draw(texture, Vector2.Zero);
spriteBatch.End();

… and run:

image

Oh…

As you can see, the order in which we make Draw calls is, by default, the order in which the sprites are drawn.  That is, the second Draw() call draws over the results of the first Draw() call, and so on.

 

There is a way to explicitly set the drawing order:

spriteBatch.Begin(sortMode: SpriteSortMode.FrontToBack);
spriteBatch.Draw(texture2, Vector2.Zero, layerDepth:1.0f);
spriteBatch.Draw(texture, Vector2.Zero, layerDepth:0.0f);
spriteBatch.End();

 

Here you are setting the SpriteBatch sort order to front to back, then manually setting the draw layer in each Draw call.  As you might guess, there is also a BackToFront setting.  SpriteSortMode is also what determines whether drawing is immediate (SpriteSortMode.Immediate) or deferred (SpriteSortMode.Deferred).

 

Blend States

 

We mentioned earlier that textures imported using the Content Pipeline have, by default, a special pre-calculated transparency channel created.  This corresponds with SpriteBatch’s default BlendState, AlphaBlend.  This uses the pre-multiplied values created by the pipeline to determine how overlapping transparent sprites are rendered.  If you don’t have a really good reason otherwise, and are using the Content Pipeline to import your textures, you should stick to the default.  I should point out that this behavior only became the default in XNA 4, so older tutorials may behave quite differently.
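
Roughly speaking, “pre-multiplied” means each texel’s color channels were scaled by its alpha value once, at import time. The following is my sketch of the idea, not the pipeline’s actual code:

// Pre-multiplying alpha: after this, the blend at draw time simplifies to
// result = source + destination * (1 - sourceAlpha).
Color Premultiply(Color c)
{
    float a = c.A / 255f;
    return new Color((byte)(c.R * a), (byte)(c.G * a), (byte)(c.B * a), c.A);
}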

 

The old default was interpolative blending, which used the raw RGBA values of the texture to determine transparency.  This could lead to some strange rendering artifacts (discussed here: https://en.wikipedia.org/wiki/Alpha_compositing).  The advantage is that all you need to blend images is an alpha channel; there is no requirement to create a special pre-multiplied channel, which means you don’t have to run these images through the content pipeline.  If you wish to do things the “old” way, when importing your assets (if they aren’t simply loaded directly from file) set PremultiplyAlpha to false in the Texture processor settings of the Content Pipeline.  Then in your SpriteBatch, do the following:

spriteBatch.Begin(blendState:BlendState.NonPremultiplied);
 
There are additional BlendState options, including Additive (colors are simply added together) and Opaque (subsequent draw calls simply overwrite the earlier ones).  You can exert a great deal of control over the BlendState, but most projects simply will not require it.  One other thing I ignored is chroma keying.  This is another option for supporting transparency: basically you dedicate a single color to be transparent, then specify that color in the Content Pipeline.  Essentially you are forming a 1-bit alpha channel and “green screening” like in the movies.  Obviously you cannot then use that color in your image, however.  In exchange for ugly source sprites and extra labor, you save on file size, as you don’t need to encode an alpha channel.
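
For comparison, switching the batch to additive blending (handy for glows and particle effects) is just a different Begin() argument:

// Colors accumulate, so overlapping sprites get brighter where they overlap.
spriteBatch.Begin(blendState: BlendState.Additive);
spriteBatch.Draw(texture, Vector2.Zero);
spriteBatch.End();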
 
 
There is some additional functionality built into SpriteBatch, including texture sampling, stencil buffers, matrix transforms and even special effects.  These are well beyond the basics though, so we will have to cover them at a later stage.
 
 

The Video

 


18. June 2015

 

So I decided to take a look at the process of extending the Godot Engine, and to my horror I discovered there is no solution file!  UGH…  Yeah, you can compile from the command line, but that’s not a pleasant development experience for an IDE warrior like myself.  I thought I had found a solution, which I posted about here, and I almost had, but it still left me unable to debug and with mostly broken Intellisense support… so, yeah, not great.  Then I found this post, which although it didn’t work as written, did fill in the missing pieces.  So, if you want to work with Godot using Visual Studio, here’s how you do it.

 

First you need a properly configured development environment… meaning Visual Studio, git and python, all in your PATH.  Next launch a Visual Studio command prompt, this guy…

image 

 

Create and change into the directory where you want to install Godot.  Keep in mind this process will create a godot folder, so if you want c:\godot\, run the following from c:\.

 

git clone https://github.com/okamstudio/godot.git

This will download the latest source for Godot Engine.  Next we want to build it for the first time and generate a Visual Studio solution file.  cd into the Godot directory and run:

scons vsproj=yes platform=windows

If you get an error here, first be sure you are using a Visual Studio command prompt, then be certain you are in the correct directory.  Otherwise your computer should churn away for a few minutes while the Godot libraries and tools are built.

After several minutes, in an ideal world you should see:

image

 

This means Godot built successfully and created your Visual Studio project files.  Woot.  Now it’s time to get Visual Studio to actually work.

 

First in the root of your project directory ( C:\Godot\ in my case ), create a file named build.bat, with the following contents:

set vc_path=%1
call %vc_path% & scons platform=windows

 

Next load the generated sln file in Visual Studio.  Give it a minute or two to parse all the files.  You will notice massive amounts of Intellisense errors; don’t worry, we will fix those next.

In Solution Explorer, right click your Project ( not solution, Project! ) and select Properties.

image

Select VC++ Directories then double click Include Directories:

image

Append the following to the value in Include Directories:

$(ProjectDir);$(ProjectDir)/core;$(ProjectDir)/core/math;$(ProjectDir)/tools;$(ProjectDir)/drivers;$(ProjectDir)/platform/windows;

This adds the include directories Godot depends on.

 

Next click NMake on the left hand side.  We now want to replace the contents of Build Command Line and Output with the following:

image

Then click Apply then OK.

 

You should now be able to hit Build->Build Solution.  If all went right, you should see build progress in the Output panel:

image

 

You can now run and debug as normal: set breakpoints within the code, hit F5 and the Godot editor will run.  Keep in mind it’s the Godot editor you are debugging, not the library, although since you have full source you should easily be able to step into Godot core code; you just may not be able to set breakpoints there.

 

You are now ready to extend or contribute to Godot using the full power and comfort of Visual Studio.

 

The following is a video showing exactly this process, just in case I missed a step.

 


18. June 2015

 

I’ve never really been much into the various cross platform build tools like SCons or CMake; I just don’t maintain all that much cross platform code, so all the extra initial effort just hasn’t been worth it.

 

Well, last night I found myself wanting to edit Godot’s source code, so let me just fire up Visual Studio and…  ahhhh crap, no Visual Studio project!  Hmmm, this is just annoying.  I don’t mind building from the command line, but I certainly mind giving up my IDE and, more importantly, Intellisense.  So I resigned myself to creating my own Visual Studio solution (sln) file.  This is always a somewhat annoying process, as you’ve got to figure out all the various dependencies and recreate them.

 

Then I had a thought… hmmmm, I wonder if this functionality is built into SCons?  I mean, CMake is essentially a build file builder (it generates a project file for your platform/compiler of choice), so perhaps this functionality is built into SCons too.  Lo and behold, it is!  If you are working on Windows with Visual Studio, simply fire up a Visual Studio command prompt so you have the proper environment variables set.

 

Then simply change to the directory where you installed the Godot source (or any other SCons built project; just locate the SConstruct file) and type:

scons vsproj=yes platform=windows

… and voila.

image

The only downside is the NMake build commands don’t work.  If you check your project you see:

image

These are the commands that are run when you choose Build, Rebuild and Clean, and basically each just calls SCons.  Oddly, for me they don’t seem to run in the proper environment (VC isn’t found), and I can’t specify the platform=windows argument required to run.

 

I’m at a bit of a loss however how I would actually debug the project, which is somewhat annoying…

 

EDIT:

Hmmm, this seems to set me up for editing which is cool, but I believe I lose access to the single best feature of Visual Studio… the debugger.


16. June 2015

 

Now we are going to talk about two important concepts in AI development for 2D games: path following and navigation meshes.  Path following is exactly what you think it is: you create paths and follow them.  This is useful for creating predefined paths in your game.  When you are looking for somewhat more dynamic pathfinding for your characters, navigation meshes (or NavMeshes) come to the rescue.  A NavMesh is simply a polygon mesh that defines where a character can and cannot travel.

 

As always there is an HD video of this tutorial available here.

Let’s start with simple path following.  For both of these examples, we are going to want a simple level to navigate.  I am going to create one simply using a single sprite background that may look somewhat familiar…

image

 

So, we have a game canvas to work with, let’s get a character sprite to follow a predefined path.

 

Path2D and PathFollow2D

 

First we need to start off by creating and defining a path to follow.  Create a new Path2D node:

image

 

This will add additional editing tools to the 2D view:

image

 

Click the Add Point button and start drawing your path, like so:

image

 

Now add a PathFollow2D node, and a Sprite attached to that node, like so:

image

 

There are the following properties on the PathFollow2D node:

image

 

You may find that your sprite starts out rotated for some reason.  The primary setting of concern though is the Offset property.  This is the distance along the path to travel; we will see it in action shortly.  The Loop value is also important, as it causes the path to return to offset 0 once it reaches the end and start the travel all over again.  Finally, I turned Rotate off, as I don’t want the sprite to rotate as it follows the path.

 

Now, create and add a script to your player sprite, like so:

extends Sprite


func _ready():
   set_fixed_process(true)

func _fixed_process(delta):
   get_parent().set_offset(get_parent().get_offset() + (50*delta))

 

This code simply gets the sprite’s parent (the PathFollow2D node) and increments its offset by 50 pixels per second.  You can see the results below:

PathFollow

 

You could of course have controlled the offset value using keyframes and an animation player, as described in the previous chapter.

 

So that’s how you can define movement across a predefined path… what about doing things a bit more dynamic?

 

Navigation2D and NavigationPolygon

 

Now let’s create a slightly different node hierarchy.  This time we need to create a Navigation2D Node, either as the root, or attached to the root of the scene.  I just made it the root node.  I also loaded in our level background sprite.  FYI, the sprite doesn’t have to be parented to the Navigation2D node.

image

 

Now we need to add a Nav Mesh to the scene, this is done by creating a NavigationPolygonInstance, as a child node of Navigation2D:

image

 

This changes the menu available in the 2D view again; now we can start drawing the NavMesh.  Start by outlining the entire level.  Keep in mind, the nav mesh defines where the character can walk, not where they can’t, so make the outer bounds of your initial polygon match the furthest extent the character can walk.  To start, click the Pen icon.  On first click you will be presented with this dialog:

image

 

Click create.  Then define the boundary polygon, like so:

image

 

Now, using the Pen button again, start defining polygons around the areas the character can’t travel.  This will cut those spaces out of the navigation polygon.  After some time, I ended up with something like this:

image

 

So we now have a NavMesh, let’s put it to use.  Godot is now able to calculate the most efficient path between two locations.

For debugging purposes I quickly imported a TTF font; you can read about this process in Chapter 5 on UI, Widgets and Themes.  Next attach a script to your Navigation2D node, then enter the following code:

extends Navigation2D
var path = []
var font = null
var drawTouch = false
var touchPos = Vector2(0,0)
var closestPos = Vector2(0,0)

func _ready():
   font = load("res://arial.fnt")
   set_process_input(true)

func _draw():
   if(path.size()):
      for i in range(path.size()):
         draw_string(font,Vector2(path[i].x,path[i].y - 20),str(i+1))
         draw_circle(path[i],10,Color(1,1,1))
      
      if(drawTouch):
         draw_circle(touchPos,10,Color(0,1,0))  
         draw_circle(closestPos,10,Color(0,1,0))
   

func _input(event):
   if(event.type == InputEvent.MOUSE_BUTTON):
      if(event.button_index == 1):
         if(path.size()):
            touchPos = Vector2(event.x,event.y)
            drawTouch = true
            closestPos = get_closest_point(touchPos)
            print("Drawing touch")
            update()
            
      if(event.button_index == 2):
         path = get_simple_path(get_node("Sprite").get_pos(),Vector2(
                event.x,event.y))
         update()

 

This code has two tasks.  First, when the user right clicks, it calculates the closest path between the character sprite and the clicked location.  This is done using the critical function get_simple_path(), which returns a Vector2Array of points between the two locations.  Once you’ve calculated at least one path (the path array needs to be populated), left clicking outside of the navmesh will show two circles: one where you clicked, the other representing the closest navigable location, as returned by get_closest_point().

 

Here is our code in action:

PacNav

As you right click, a new path is established, drawn in white dots.  Then left clicking marks the location of the click and the nearest walkable location in the nav polygon.  You may notice the first left click resulted in it drawing a location to the left of the screen.  This is because my navmesh wasn’t water tight; let’s look:

image

 

Although minuscule in size, this small sliver of polygons is a valid path to the computer.  When setting up your nav meshes, be sure you don’t leave gaps like this!

 

There are a couple things you might notice.  The path returned is the minimum direct navigable line between two points.  It does not, however, take into account the size of the item you want to move.  This is logic that you need to provide yourself.  In the example of something like PacMan, you are probably better off using a cell based navigation system built on an algorithm like A*.  I really wish get_simple_path() allowed you to specify the radius of your sprite’s bounding circle, to determine whether the path is actually wide enough to travel.  As it stands now, you have to completely fill in areas that are too small for your sprite, which renders Navigation2D of little use to nodes of varying sizes.

 

Regardless of the limitations, Navigation2D and Path2D provide a great template for 2D AI development.

 

The Video



nVidia entering mobile console market. Cool features wrapped up in a horrible design.


7. January 2013

As per this story on The Register, nVidia has announced they are creating a new handheld console named "Shield".

 

Let's start off with the most obvious: it's ugly, exceedingly ugly.  The form factor isn't exactly practical either; this isn't a handheld you are going to be slipping into your pocket anytime soon.  Which raises the question… why make a handheld at all?

 

 

The Shield, nVidia's clamshell portable console.

nVidia Shield Handheld console

 

Told you it was ugly!  Basically it's a tablet pasted onto an Xbox 360 controller.  I am surprised, given the recent spate of clones and copycat controllers, like this one or worse... the WiiU, that Microsoft isn't launching lawsuits.

 

Alright, those are all the negatives, now for some positives.  First off, the thing is beefy.  It's powered by nVidia's Tegra 4 chipset with a quad core ARM processor.  It runs Android (Jelly Bean currently) and should be compatible with all the games in the current app store, although I can't really imagine touch-only games being comfortable if the controller isn't detachable.  It also ships with HDMI out, for those looking for the full console experience.  The screen itself is 720p.

 

Now the one feature that might actually redeem the thing… it can connect to your PC and play your Steam games by streaming them over wifi.  Now that feature… is neat.  I've actually done this already with existing iOS and Android software, and the latency is actually fine, it's always the controls that let you down.  So I expect the experience to actually be quite nice.  However, if you are home and able to stream your Steam games, why not just play them on your PC?  Now if it lets you buy games from Steam without a PC, that could be a game changer.

 

As you may be able to tell from my tone, I think nVidia are crazy.  It's too niche a product entering a segment that is already struggling, especially when the OUYA Android console already exists and Steam is rumoured to be bringing out a console shortly.  Plus, it really is ugly and unwieldy.

 

It's apparently in beta now and could be coming to market within months.


