28. June 2015

 

In this chapter we are going to explore handling input from the keyboard, mouse and gamepad in your MonoGame game.  XNA/MonoGame also have support for mobile-specific input such as motion and touch screens; we will cover these topics in a later chapter.

 

There is an HD video of this chapter available here: [coming soon]

 

XNA's input capabilities were at once powerful, straightforward and a bit lacking.  If you come from another game engine or library you may be shocked to discover there is no event-driven interface out of the box, for example.  All input in XNA is done via polling; if you want an event layer, you build it yourself or use one of the existing 3rd party implementations.  On the other hand, as you are about to see, the provided interfaces are incredibly consistent and easy to learn.

 

Handling Keyboard Input

 

Let’s start straight away with a code sample:

using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;
using Microsoft.Xna.Framework.Input;
using System.Text;

namespace Example1
{
    public class Game1 : Game
    {
        GraphicsDeviceManager graphics;
        SpriteBatch spriteBatch;
        Vector2 position;
        Texture2D texture;

        public Game1()
        {
            graphics = new GraphicsDeviceManager(this);
            Content.RootDirectory = "Content";
        }

        protected override void Initialize()
        {
            base.Initialize();
            position = new Vector2(graphics.GraphicsDevice.Viewport.Width / 2 - 64,
                                   graphics.GraphicsDevice.Viewport.Height / 2 - 64);
        }

        protected override void LoadContent()
        {
            spriteBatch = new SpriteBatch(GraphicsDevice);
            texture = this.Content.Load<Texture2D>("logo128");
        }

        protected override void UnloadContent()
        {
        }

        protected override void Update(GameTime gameTime)
        {
            // Poll for current keyboard state
            KeyboardState state = Keyboard.GetState();
            
            // If they hit esc, exit
            if (state.IsKeyDown(Keys.Escape))
                Exit();

            // Print to debug console currently pressed keys
            System.Text.StringBuilder sb = new StringBuilder();
            foreach (var key in state.GetPressedKeys())
                sb.Append("Key: ").Append(key).Append(" pressed ");

            if (sb.Length > 0)
                System.Diagnostics.Debug.WriteLine(sb.ToString());
            else
                System.Diagnostics.Debug.WriteLine("No Keys pressed");
            
            // Move our sprite based on arrow keys being pressed:
            if (state.IsKeyDown(Keys.Right))
                position.X += 10;
            if (state.IsKeyDown(Keys.Left))
                position.X -= 10;
            if (state.IsKeyDown(Keys.Up))
                position.Y -= 10;
            if (state.IsKeyDown(Keys.Down))
                position.Y += 10;

            base.Update(gameTime);
        }

        protected override void Draw(GameTime gameTime)
        {
            GraphicsDevice.Clear(Color.CornflowerBlue);

            spriteBatch.Begin();
            spriteBatch.Draw(texture, position);
            spriteBatch.End();
            base.Draw(gameTime);
        }
    }
}

 

We are going to re-use the same basic example for all the examples in this chapter.  It simply draws a sprite centered on the screen, then we manipulate its position in Update().

image

 

In this particular example, when the user hits keys, they are logged to the debug console:

image

 

Now let’s take a look at the keyboard-specific code.  It all starts with a call to Keyboard.GetState(), which returns a struct containing the current state of the keyboard, including modifier keys like Control or Shift.  It also contains a method named GetPressedKeys(), which returns an array of all the keys that are currently pressed.  In this example we simply loop through the pressed keys, writing them out to the debug console.  Finally, we poll the pressed state of the arrow keys and move our position accordingly.
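Modifier keys are polled exactly the same way as any other key.  As a quick sketch (the speed values here are arbitrary, chosen just for illustration), you could make the arrow keys move the sprite faster while a Shift key is held:

```csharp
// Sketch: hold either Shift key to move faster.  The 10f/20f speeds
// are arbitrary example values, not from the chapter's code.
KeyboardState state = Keyboard.GetState();
float speed = 10f;
if (state.IsKeyDown(Keys.LeftShift) || state.IsKeyDown(Keys.RightShift))
    speed = 20f;  // boost while a Shift key is held
if (state.IsKeyDown(Keys.Right))
    position.X += speed;
if (state.IsKeyDown(Keys.Left))
    position.X -= speed;
```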

 

Handling Key State Changes

One thing you might notice with XNA is that you are only ever checking the current state of a key: it is either pressed or it is not.  What if you only want to respond when the key is first pressed?  This requires a bit of work on your behalf.

    public class Game1 : Game
    {
        GraphicsDeviceManager graphics;
        SpriteBatch spriteBatch;
        Vector2 position;
        Texture2D texture;
        KeyboardState previousState;

        public Game1()
        {
            graphics = new GraphicsDeviceManager(this);
            Content.RootDirectory = "Content";
        }

        protected override void Initialize()
        {
            base.Initialize();
            position = new Vector2(graphics.GraphicsDevice.Viewport.Width / 2 - 64,
                                   graphics.GraphicsDevice.Viewport.Height / 2 - 64);

            previousState = Keyboard.GetState();
        }

        protected override void LoadContent()
        {
            spriteBatch = new SpriteBatch(GraphicsDevice);
            texture = this.Content.Load<Texture2D>("logo128");
        }

        protected override void Update(GameTime gameTime)
        {
            KeyboardState state = Keyboard.GetState();
            
            // If they hit esc, exit
            if (state.IsKeyDown(Keys.Escape))
                Exit();

            // Move our sprite based on arrow keys being pressed:
            if (state.IsKeyDown(Keys.Right) && !previousState.IsKeyDown(Keys.Right))
                position.X += 10;
            if (state.IsKeyDown(Keys.Left) && !previousState.IsKeyDown(Keys.Left))
                position.X -= 10;
            if (state.IsKeyDown(Keys.Up))
                position.Y -= 10;
            if (state.IsKeyDown(Keys.Down))
                position.Y += 10;

            base.Update(gameTime);

            previousState = state;
        }

        protected override void Draw(GameTime gameTime)
        {
            GraphicsDevice.Clear(Color.CornflowerBlue);

            spriteBatch.Begin();
            spriteBatch.Draw(texture, position);
            spriteBatch.End();
            base.Draw(gameTime);
        }
    }

 

The changes to our code are highlighted.  Essentially, if you want to check for changes in input state (this applies to gamepad and mouse events too), you need to track them yourself.  This is a matter of keeping a copy of the previous state; then in your input check you test not only whether a key is pressed, but also whether it was pressed in the previous state.  If it wasn’t, this is a new key press and we respond accordingly.  In the above example, on Left or Right arrow presses we only respond to new key presses, so moving left or right requires repeatedly hitting and releasing the arrow key.
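If you find yourself doing this check for many keys, it can be tidied into a small helper.  This is one possible sketch (the method name is my own, not part of XNA), assuming previousState is updated at the end of Update() as in the listing above:

```csharp
// Returns true only on the frame the key transitions from up to down.
bool IsNewKeyPress(Keys key, KeyboardState current, KeyboardState previous)
{
    return current.IsKeyDown(key) && previous.IsKeyUp(key);
}

// Usage inside Update():
//   if (IsNewKeyPress(Keys.Right, state, previousState))
//       position.X += 10;
```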

 

Handling Mouse Input

Next we explore the process of handling mouse input.  You will notice the process is almost identical to keyboard handling.  Once again, let’s jump right in with a code example.

using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;
using Microsoft.Xna.Framework.Input;
using System.Text;

namespace Example2
{
    public class Game1 : Game
    {
        GraphicsDeviceManager graphics;
        SpriteBatch spriteBatch;
        Vector2 position;
        Texture2D texture;

        public Game1()
        {
            graphics = new GraphicsDeviceManager(this);
            Content.RootDirectory = "Content";
        }

        protected override void Initialize()
        {
            base.Initialize();
            position = new Vector2(graphics.GraphicsDevice.Viewport.Width / 2,
                                   graphics.GraphicsDevice.Viewport.Height / 2);

            
        }

        protected override void LoadContent()
        {
            spriteBatch = new SpriteBatch(GraphicsDevice);
            texture = this.Content.Load<Texture2D>("logo128");
        }

        protected override void Update(GameTime gameTime)
        {
            MouseState state = Mouse.GetState();

            // Update our sprite's position to the current cursor location
            position.X = state.X;
            position.Y = state.Y;

            // Check if Right Mouse Button pressed, if so, exit
            if (state.RightButton == ButtonState.Pressed)
                Exit();

            base.Update(gameTime);
        }

        protected override void Draw(GameTime gameTime)
        {
            GraphicsDevice.Clear(Color.CornflowerBlue);

            spriteBatch.Begin();
            spriteBatch.Draw(texture, position,
                             origin: new Vector2(64, 64));
            spriteBatch.End();
            base.Draw(gameTime);
        }
    }
}

 

When you run this example, the texture will move around relative to the location of the mouse.  When the user right-clicks, the application exits.  The logic works almost identically to handling keyboard input.  Each frame you check the MouseState by calling Mouse.GetState().  This MouseState struct contains the current mouse X and Y as well as the status of the left, right and middle mouse buttons and the scroll wheel position.  You may notice there are also values for XButton1 and XButton2; these buttons can change from device to device, but generally represent forward and back navigation buttons.  On devices with no mouse support, X and Y will always be 0, while each button state will always be set to ButtonState.Released.  If you are dealing with a multi-touch device this code will continue to work, although the values will only reflect the primary (first) touch point.  We will discuss mobile input in more detail in a later chapter.  As with handling keyboard events, if you want to track changes in state, you will have to track them yourself.
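One detail worth knowing about the scroll wheel: ScrollWheelValue is a cumulative total, so to react to scrolling you track the previous value yourself, just like key state changes.  A sketch (previousScroll is an assumed int field you would add to the Game class, and the divisor is an arbitrary example value):

```csharp
// Sketch: the scroll wheel value is cumulative since game start, so keep
// the previous reading in a field (previousScroll, assumed) to compute
// the per-frame change.
MouseState state = Mouse.GetState();
int scrollDelta = state.ScrollWheelValue - previousScroll;
previousScroll = state.ScrollWheelValue;

if (scrollDelta != 0)
    position.Y -= scrollDelta / 10;  // scroll to nudge the sprite vertically
```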

 

If you add the following code to your Update(), you will notice some interesting things about the X,Y position of the mouse:

        protected override void Update(GameTime gameTime)
        {
            MouseState state = Mouse.GetState();

            // Update our sprite's position to the current cursor location
            position.X = state.X;
            position.Y = state.Y;

            System.Diagnostics.Debug.WriteLine(position.X.ToString() +
                                               "," + position.Y.ToString());
            // Check if Right Mouse Button pressed, if so, exit
            if (state.RightButton == ButtonState.Pressed)
                Exit();

            base.Update(gameTime);
        }

The X and Y values are relative to the window’s origin.  That is, (0,0) is the top left corner of the drawable portion of the window, while (width,height) is the bottom right corner.  However, in a windowed app the mouse pointer location continues to be updated even when the cursor leaves the window, still relative to the top left corner of the application window, so the reported values can fall outside the window’s bounds.

image

 

You can also set the position of the cursor in code using the following line:

            if (state.MiddleButton == ButtonState.Pressed)
                Mouse.SetPosition(graphics.GraphicsDevice.Viewport.Width / 2,
                                  graphics.GraphicsDevice.Viewport.Height / 2);

 

This code will center the mouse cursor in the middle of the screen when the user presses the middle button.

 

Finally, it’s common to want to display the mouse cursor; this is easily accomplished using:

            IsMouseVisible = true;

This member of the Game class toggles the visibility of the system mouse cursor.

image

 

 

Handling Gamepad Input

 

Now we will look at handling input from a gamepad or joystick controller.  You probably won’t be surprised to discover the process is remarkably consistent.

using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;
using Microsoft.Xna.Framework.Input;
using System.Text;

namespace Example3
{
    public class Game1 : Game
    {
        GraphicsDeviceManager graphics;
        SpriteBatch spriteBatch;
        Vector2 position;
        Texture2D texture;

        public Game1()
        {
            graphics = new GraphicsDeviceManager(this);
            Content.RootDirectory = "Content";
        }

        protected override void Initialize()
        {
            base.Initialize();
            position = new Vector2(graphics.GraphicsDevice.Viewport.Width / 2,
                                   graphics.GraphicsDevice.Viewport.Height / 2);
        }

        protected override void LoadContent()
        {
            spriteBatch = new SpriteBatch(GraphicsDevice);
            texture = this.Content.Load<Texture2D>("logo128");
        }

        protected override void Update(GameTime gameTime)
        {
            if (Keyboard.GetState().IsKeyDown(Keys.Escape)) Exit();

            // Check the device for Player One
            GamePadCapabilities capabilities =
                GamePad.GetCapabilities(PlayerIndex.One);
            
            // If there is a controller attached, handle it
            if (capabilities.IsConnected)
            {
                // Get the current state of Controller1
                GamePadState state = GamePad.GetState(PlayerIndex.One);

                // You can check explicitly if a gamepad has support for a certain feature
                if (capabilities.HasLeftXThumbStick)
                {
                    // Check the direction in the X axis of the left analog stick
                    if (state.ThumbSticks.Left.X < -0.5f)
                        position.X -= 10.0f;
                    if (state.ThumbSticks.Left.X > 0.5f)
                        position.X += 10.0f;
                }

                // You can also check the controllers "type"
                if (capabilities.GamePadType == GamePadType.GamePad)
                {
                    if (state.IsButtonDown(Buttons.A))
                        Exit();
                }
            }
            base.Update(gameTime);
        }

        protected override void Draw(GameTime gameTime)
        {
            GraphicsDevice.Clear(Color.CornflowerBlue);

            spriteBatch.Begin();
            spriteBatch.Draw(texture, position,
                             origin: new Vector2(64, 64));
            spriteBatch.End();
            base.Draw(gameTime);
        }
    }
}

 

When you run this example, if there is a controller attached, pressing left or right on the analog stick will move the sprite accordingly.  Hitting the A button (or pressing Escape for those without a controller) will cause the game to exit.

 

The logic here is remarkably consistent with mouse and keyboard handling.  The primary difference is that the number of controllers attached and the capabilities of each controller can vary massively, so the code needs to respond appropriately.  In the above example we check only the first attached controller, by passing PlayerIndex.One to our GetState() call.  You can have up to 4 controllers attached, and each needs to be polled separately.
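Polling all four possible controllers is just a loop over the PlayerIndex values.  A minimal sketch:

```csharp
// Sketch: poll each of the four possible controllers separately,
// skipping any slot with no controller attached.
for (int i = 0; i < 4; i++)
{
    PlayerIndex index = (PlayerIndex)i;
    if (GamePad.GetCapabilities(index).IsConnected)
    {
        GamePadState state = GamePad.GetState(index);
        // ... handle this player's input ...
    }
}
```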

 

Supported Gamepads


On the PC there are a plethora of devices available with a wide range of capabilities. You can have up to 4 different controllers attached, each accessible by passing the appropriate PlayerIndex to GetState(). The following device types can be returned:

  • AlternateGuitar
  • ArcadeStick
  • BigButtonPad
  • DancePad
  • DrumKit
  • FlightStick
  • GamePad
  • Guitar
  • Unknown
  • Wheel


Obviously each device supports a different set of features, which can be polled individually using the GamePadCapabilities struct returned by GamePad.GetCapabilities().


Buttons on a GamePad controller are treated just like keys and mouse buttons, with a value of Pressed or Released.  Once again, if you want to track changes in state you need to code this functionality yourself.  When dealing with analog sticks, the value returned is a Vector2 representing the current position of the stick.  A component value of 1.0f represents a stick that is fully up or right, while a value of -1.0f represents a stick that is fully left or down.  A stick at (0,0) is at rest.
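Because the stick reports a continuous value rather than just pressed/released, you can also use the full analog range to scale movement speed instead of the threshold test used in the example above.  A sketch (the 10.0f speed is an arbitrary example value):

```csharp
// Sketch: scale movement by how far the stick is deflected.
// Note the Y axis is inverted relative to screen coordinates:
// stick up is +1, but moving up on screen decreases Y.
GamePadState state = GamePad.GetState(PlayerIndex.One);
Vector2 stick = state.ThumbSticks.Left;   // each component in [-1, 1]
position.X += stick.X * 10.0f;
position.Y -= stick.Y * 10.0f;
```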

 

There is a small challenge in dealing with analog controls, however, that you should be aware of.  Even when a stick is not pressed, it is almost never at the position (0.0f,0.0f); the sensors often report very small fluctuations from true zero.  This means that if you respond directly to input without taking these small variations into account, your sprites will “twitch” while they are supposed to be stationary.  This is worked around using something called a dead zone: a range of motion values considered too small to register.  You can think of a dead zone as a value that is “close enough to zero to be considered zero”.

 

You have a few options with XNA/MonoGame for dealing with dead zones.  The default is IndependentAxes, which compares each axis against the dead zone separately; Circular combines the X and Y values together before the comparison (recommended for controls that use both axes together, such as a thumbstick controlling a 3D view); and finally None ignores the dead zone completely.  You would generally choose None if you don’t care about a dead zone, or wish to implement one yourself.

            GamePadState state = GamePad.GetState(PlayerIndex.One,
                                                  GamePadDeadZone.Circular);

 

As you can see, XNA's input handling is somewhat sparse compared to other game engines, but it does provide the building blocks to build more complex systems if required. The approach to handling input across devices is remarkably consistent, making it easy to use and hopefully resulting in fewer unexpected behaviors and bugs.

 

 

The Video

[Coming Soon]


26. June 2015

 

The lattice object/modifier pairing is an under-appreciated modeling tool in Blender that makes it fast and simple to make sweeping changes to high-density polygon meshes, much faster than even sculpting.

 

This quick Blender Tip tutorial video (available in HD here) illustrates the process.

 

 

Basically, you envelop your existing mesh in a Lattice Object:

image

 

Like so:

image

 

Zoomed to illustrate better:

image

 

Then in object mode with your mesh selected, go to the modifiers tab and add a Lattice Modifier, click Object and select your Lattice:

image

 

Now select the Lattice object, and set the number of divisions you wish ( how many control points in each direction ):

image

 

Switch to Edit mode and make whatever alterations you wish, changes to the lattice will be applied to the underlying shape:

image

When done, with the Mesh selected, go to modifiers and click Apply:

image

 

You can now delete the Lattice, and tada, updated polygon mesh:

image

 

The video also shows how to set a shape key for animation if you are curious. 


24. June 2015

 

The MonoGame tutorial series has been written from day one with the intention of being compiled into book format.  As a thank you to GameFromScratch.com Patreon backers, WIP copies (as well as finished books) are available for download.

 

 

This represents the first compilation of Cross Platform Game Development with MonoGame and contains all of the tutorial series to date:

  • An Introduction and Brief History of XNA and MonoGame
  • Getting Started with MonoGame on Windows
  • Getting Started with MonoGame on MacOS
  • Creating an Application
  • Textures and SpriteBatch

 

These represent early drafts, so the formatting isn’t final, a thorough proofreading is still needed, some images need to be regenerated, and of course “book” items like a proper foreword, table of contents and index all need to be generated.  These tasks will all have to wait for the book to be finished.

 

The book is currently available in the following formats:

  • PDF
  • epub
  • mobi

 

The books are available for download here. (Authentication required)

 

If there is an additional format you would like to see it compiled for, please let me know.  Currently the book weighs in at 77 pages.  As new tutorials are added, new compilations will be released.


23. June 2015

 

With the release of version 4.8 of Unreal Engine, playing audio became a great deal easier for 2D games with the addition of PlaySound2D.  In this section we are going to learn how to import and play audio files in Unreal Engine.  For the application controller I created a simple UI that fires off the playing of audio.  If you are unfamiliar with creating a UI with UMG (Unreal Motion Graphics), be sure to read the previous tutorial.

 

As always there is an HD video version of this tutorial available right here.

We are going to be creating a simple UI to fire off audio events:

image

 

We will simply wire each button up to fire off one of our examples.  I also needed several audio samples; I personally downloaded each one from freesound.org.

 

Importing Audio Files

 

First we need some audio to work with.  So then… what audio files work with Unreal Engine?  Mp3, mp4, ogg?  Nope… WAV.  You can import your sound files in whatever format you want, so long as it’s wav.  Don’t worry, this isn’t as big a hindrance as it sounds, as Unreal simply takes care of the compression and conversion steps required for you, so the fact your soundtrack is 10MB in size isn’t as damning as it seems.  Being in an uncompressed source format enables Unreal to offer a lot of power, as you will see shortly.  It also neatly steps around a number of licensing concerns, such as the patent minefield that is mp3.  If your source files aren’t in wav format, you can easily convert them using the freely available and completely awesome Audacity sound editor.

 

Your WAV files can be in PCM, ADPCM or DVI ADPCM format, although if using defaults you most likely don’t need to worry about this detail.  They should be 16-bit, little-endian (again… generally don’t worry), uncompressed, at any sample rate; 22kHz and 44.1kHz are recommended however, with the latter being the rate CD-quality audio is encoded at.  Your audio files can be either mono (single channel) or stereo (dual channel), and you can import up to 8 channels of audio (generally 8 mono WAV files) to encode 7.1 surround sound.  This is way beyond the scope of what we will be covering, but more details about 7.1 encoding can be found here.  Importing audio is as simple as using the Import button in the Content Browser, or simply drag and drop.

 

Once imported, you can double click your audio asset to bring up the editor.

image

 

Here you can set a number of properties including the compression amount, whether to loop, the pitch, and you can even add subtitle information.  There isn’t anything we need to modify right now though.  I have imported a couple of different mono format wav files, like so:

image

 

And created a simple button to play the audio when pressed:

image

 

Playing Sounds

 

Now let’s wire up the OnClick event to play Thunder.wav, with the following blueprint:

image

 

Yeah… that’s all you need to do: drop in a Play Sound 2D function, pick the wave file to play, and done.  Before 4.8 the only option was Play Sound at Location, which is virtually identical but required a position as well.  You can achieve the same effect this way:

image

 

Both Play Sound at Location and Play Sound 2D are fire and forget, in that you have no control over them after the sound has begun to play (other than at a global level, like muting all audio).  Neither moves with the actor, either.

 

What if you want the audio to come from, or move with, a node in the scene?  This is possible too.  First let’s create a Paper2D character to attach the audio component to.  This process was covered in this tutorial in case you need a refresher.  Don’t forget to create a GameMode as well and configure your newly created controller to be active.

 

Using the Audio Component

 

I created this hierarchy of a character:

image

Notice the Audio component I’ve added?  There are several properties that can be set in the Details panel for the audio component, but the most important is the sound.

image

I went ahead and attached my “music” Sound Wave.  You can set the music file to automatically play using the Activation property:

image

There is also an event available that will fire when your audio file has finished playing. 

image

Unlike PlaySound2D, this sound isn’t fire and forget.   It can also be changed dynamically using the following Blueprint:

image

This blueprint finds the Audio component of our Pawn and then sets its Sound using a call to Play Sound Attached.  As you can see, there are several available properties to set, and you can easily position the audio in the world.

 

As I mentioned earlier, you can also manipulate a running Sound wave when attached as an audio component, like so:

image

 

Paradoxically, there doesn’t actually seem to be a method to get the current volume.  The obvious solution is to keep the volume as a variable and pass it to Adjust Volume Level.

 

Sound Cues

So far we’ve only used directly imported Sound Wave files, but everywhere we used a Wave, we could also have used a Cue.  As you will see, Cues give you an enormous amount of control over your audio.

 

Start by creating a new Sound Cue object:

image

Name it, then double click to bring up the Sound Cue editor:

image

This is well beyond the scope of this tutorial, but you can essentially compose complex sounds out of Sound nodes, like this simple graph mixing two sounds together:

image

 

Again, any of the earlier functions such as Play Sound 2D will take a Cue in place of a Wave.

 

We have only scratched the very surface of audio functionality built into Unreal Engine, but this should be more than enough to get you started in 2D.

 

The Video


22. June 2015

 

Export to web has been a popular bullet point on most game engine feature lists as of late.  Both Unreal Engine and Unity recently added native HTML5 export (Unity has had a plugin option for years, but plugins have gone the way of the Dodo).  LibGDX has offered an HTML5 target for ages, and Haxe-based game engines can all cross-compile to HTML or Flash.  GameMaker, Construct, Stencyl and more have HTML5 exporters.  Of course, there are a number of HTML5-native engines (I suppose Construct should be on that list, actually…), but they aren’t really what this conversation is about.

 

At the end of the day, though… is anyone actually doing it?  Is there a “shipped” title from any of these engines that people are playing in any quantity?  It is my understanding that the vast majority of commercial web and Facebook games are still Flash-based, with HTML5 representing perhaps 10-20% of titles.  More importantly, most of these titles are relatively simple, with nothing harnessing a fraction of the power of an Unreal- or Unity-class engine.

 

It leads me to question, is HTML5 a feature everybody wants but nobody uses?

 

What inspired this thought was this interesting article on Godot’s efforts to target the web.  I have been actively monitoring the game development world during the entire timespan Juan described, and I witnessed all of these “next big thing” technologies come and go.  Summarized from the article, they were:

  • Native Plugins
  • Google Native Client
  • Flash
  • ASM.js
  • WebAssembly

 

This is by no means a comprehensive list (LibGDX compiles from Java to JavaScript using Google’s GWT, for example), and some of these technologies were certainly successful for a time, such as Flash.  Of course, games written and targeted for HTML5 do exist.  It’s this whole “HTML5 as an additional target” approach that I am beginning to think is just a gimmick.  The funny part is, I love the idea too… I evaluate a lot of game engines and I always see “HTML5 export” and think “ooooh, that’s good”.

 

Games in the browser certainly seemed to have a brilliant future at one point.  The Unity web plugin really did bring Unity to the web.  I used a couple of Unity-powered 3D tools, such as Mixamo, and the experience was very near to native (they have since ported to HTML5, since the Unity plugin was end-of-lifed).  Google’s Native Client (NaCl) had some promise, with shipped titles such as AirMech leading the way, but a single-browser solution was never going to fly.  Ultimately Flash, especially with its promising (at the time) 3D API, had the most potential, but a combination of Apple’s malevolence and Adobe’s incompetence brought that to an inglorious end.

 

Perhaps it’s just ignorance on my end.  Can anyone out there point me at an HTML5 game (not a tech demo, an actual game) created in either Unity or Unreal Engine?  I realize both exporters are fairly new, so I would also settle for examples that are under development but show promise.  Keep in mind, I’m not talking about HTML5 game engines; there certainly is a future in HTML5 games, and of course they can be wrapped and deployed to a variety of platforms.  No, it’s “HTML5 as an additional target platform” support that seems to be pure gimmick at this point.
