25. July 2015

 

In this chapter we are going to look at using audio in XNA.  Originally XNA supported one way of playing audio, using XACT ( the Cross Platform Audio Creation Tool ).  Since the initial release they added a much simpler API.  We will be taking a look at both processes.

 

There is an HD video of this chapter available here.

 

When playing audio there is always the challenge of what formats are supported, especially when you are dealing with multiple different platforms, all of which have different requirements.  Fortunately the content pipeline takes care of a great deal of the complications for us.  Simply add your audio files ( mp3, mp4, wma, wav, ogg ) to the content pipeline and it will do the rest of the work for you.   As you will see shortly though, it is also possible to load audio files outside of the content pipeline.  In this situation, be aware that certain platforms do not support certain formats ( for example, no wma support on Android or iOS, while iOS doesn’t support ogg but does support mp3 ).  Unless you have a good reason, I would recommend you stick to the content pipeline for audio whenever possible.

 

The Perils of MP3

Although MP3 is supported by MonoGame, you probably want to stay away from using it. Why?
Patents. If your game has over 5,000 users you could be legally required to purchase a license. From a legal perspective, Ogg Vorbis is superior in every single way. Unfortunately Ogg support is not as ubiquitous as we'd like it to be.

 

Adding Audio Content using the Content Pipeline

This process is virtually identical to adding a graphic file to your content project.

image

 

Simply add the content like you did using right click->Add Existing Items or the Edit menu:

image

 

If it is a supported format you will see the Processor field is filled in ( otherwise it will display Unknown ).  The only option here is to configure the mp3 audio quality, a trade-off between size and fidelity.

 

Playing a Song

Now let’s look at the code involved in playing the song we just added to our game.

// This example shows playing a song using the simplified audio api

using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;
using Microsoft.Xna.Framework.Input;
using Microsoft.Xna.Framework.Media;

namespace Example1
{
    public class Game1 : Game
    {
        GraphicsDeviceManager graphics;
        SpriteBatch spriteBatch;
        Song song;

        public Game1()
        {
            graphics = new GraphicsDeviceManager(this);
            Content.RootDirectory = "Content";
        }

        protected override void Initialize()
        {
            base.Initialize();
        }

        protected override void LoadContent()
        {
            spriteBatch = new SpriteBatch(GraphicsDevice);

            this.song = Content.Load<Song>("prepare");
            MediaPlayer.Play(song);
            //  Uncommenting the following line will also loop the song
            //  MediaPlayer.IsRepeating = true;
            MediaPlayer.MediaStateChanged += MediaPlayer_MediaStateChanged;
        }

        void MediaPlayer_MediaStateChanged(object sender, System.EventArgs e)
        {
            // 0.0f is silent, 1.0f is full volume
            MediaPlayer.Volume -= 0.1f;
            MediaPlayer.Play(song);
        }

        protected override void Update(GameTime gameTime)
        {
            if (GamePad.GetState(PlayerIndex.One).Buttons.Back == 
                ButtonState.Pressed || Keyboard.GetState().IsKeyDown(
                Keys.Escape))
                Exit();

            base.Update(gameTime);
        }

        protected override void Draw(GameTime gameTime)
        {
            GraphicsDevice.Clear(Color.CornflowerBlue);
            base.Draw(gameTime);
        }
    }
}

 

Notice that we added the using statement Microsoft.Xna.Framework.Media.  We depend on this for the MediaPlayer and Song classes.  Our Song is loaded using the ContentManager just like we did earlier with Texture, this time with the type Song.  Once again the content loader does not use the file’s extension.  Our Song can then be played with a call to MediaPlayer.Play().  In this example we wire up a MediaStateChanged event handler that will be called when the song completes, decreasing the volume and playing the song again.
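In addition to Play(), the MediaPlayer class exposes a few other useful static members.  The following is a minimal sketch ( not a complete program ) of the calls you are most likely to reach for, assuming a Song has already been loaded and played as above:

```csharp
// Sketch: other MediaPlayer controls, assuming a Song is already playing.
MediaPlayer.Volume = 0.5f;        // half volume (0.0f is silent, 1.0f is full)
MediaPlayer.IsRepeating = true;   // loop automatically instead of wiring up events

if (MediaPlayer.State == MediaState.Playing)
    MediaPlayer.Pause();          // pause playback...
MediaPlayer.Resume();             // ...and pick up where we left off
MediaPlayer.Stop();               // stop entirely; Play() restarts from the beginning
```

Note that the MediaState enum, like MediaPlayer and Song, lives in Microsoft.Xna.Framework.Media.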

 

Playing Sound Effects

 

This example shows playing sound effects.  Unlike a Song, SoundEffects are designed to support multiple instances playing at once.  Let’s take a look at playing a SoundEffect in MonoGame:

// Example showing playing sound effects using the simplified audio API
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;
using Microsoft.Xna.Framework.Input;
using Microsoft.Xna.Framework.Audio;
using System.Collections.Generic;

namespace Example2
{
    public class Game1 : Game
    {
        GraphicsDeviceManager graphics;
        SpriteBatch spriteBatch;
        List<SoundEffect> soundEffects;

        public Game1()
        {
            graphics = new GraphicsDeviceManager(this);
            Content.RootDirectory = "Content";
            soundEffects = new List<SoundEffect>();
        }

        protected override void Initialize()
        {
            base.Initialize();
        }

        protected override void LoadContent()
        {
            // Create a new SpriteBatch, which can be used to draw textures.
            spriteBatch = new SpriteBatch(GraphicsDevice);

            soundEffects.Add(Content.Load<SoundEffect>("airlockclose"));
            soundEffects.Add(Content.Load<SoundEffect>("ak47"));
            soundEffects.Add(Content.Load<SoundEffect>("icecream"));
            soundEffects.Add(Content.Load<SoundEffect>("sneeze"));

            // Fire and forget play
            soundEffects[0].Play();
            
            // Play that can be manipulated after the fact
            var instance = soundEffects[0].CreateInstance();
            instance.IsLooped = true;
            instance.Play();
        }


        protected override void Update(GameTime gameTime)
        {
            if (GamePad.GetState(PlayerIndex.One).Buttons.Back == 
                ButtonState.Pressed || Keyboard.GetState().IsKeyDown(
                Keys.Escape))
                Exit();

            if (Keyboard.GetState().IsKeyDown(Keys.D1))
                soundEffects[0].CreateInstance().Play();
            if (Keyboard.GetState().IsKeyDown(Keys.D2))
                soundEffects[1].CreateInstance().Play();
            if (Keyboard.GetState().IsKeyDown(Keys.D3))
                soundEffects[2].CreateInstance().Play();
            if (Keyboard.GetState().IsKeyDown(Keys.D4))
                soundEffects[3].CreateInstance().Play();


            if (Keyboard.GetState().IsKeyDown(Keys.Space))
            {
                if (SoundEffect.MasterVolume == 0.0f)
                    SoundEffect.MasterVolume = 1.0f;
                else
                    SoundEffect.MasterVolume = 0.0f;
            }
            base.Update(gameTime);
        }

        protected override void Draw(GameTime gameTime)
        {
            GraphicsDevice.Clear(Color.CornflowerBlue);

            base.Draw(gameTime);
        }
    }
}

 

Note the using Microsoft.Xna.Framework.Audio statement at the beginning.  Once again we added our audio files using the Content Pipeline, in this case several WAV files.  They are loaded using Content.Load(), this time with the type SoundEffect.  Next it is important to note the two different ways SoundEffects are played.  You can call Play() directly on the SoundEffect class, which creates a fire and forget instance with minimal options for controlling it.  If you need greater control ( such as changing the volume, looping or applying effects ) you should instead create a SoundEffectInstance using the SoundEffect.CreateInstance() call.  You should also create a separate instance if you want to have multiple concurrent instances of the same sound effect playing.  It is important to realize that all instances of the same SoundEffect share resources, so memory will not increase massively for each instance created.  The number of simultaneously supported sounds varies from platform to platform, with 64 being the limit on Windows Phone 8, while the Xbox 360 limits it to 300 instances.  There is no hard limit on the PC, although you will obviously hit device limitations quickly enough.
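A SoundEffectInstance is what gives you that per-instance control.  Here is a hedged sketch of the main properties involved, reusing the soundEffects list loaded above:

```csharp
// Sketch: per-instance control of a SoundEffectInstance.
SoundEffectInstance instance = soundEffects[0].CreateInstance();
instance.Volume = 0.75f;   // 0.0f silent to 1.0f full volume
instance.Pitch = 0.5f;     // -1.0f (an octave down) to 1.0f (an octave up)
instance.Pan = -1.0f;      // -1.0f full left to 1.0f full right
instance.IsLooped = true;  // must be set before calling Play()
instance.Play();
// Later you can call instance.Pause(), instance.Resume() or instance.Stop()
```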

 

In the above example, we create a single looping sound effect right away.  Then each frame we check to see if the user presses 1, 2, 3 or 4 and play an instance of the corresponding sound effect.  If the user hits the spacebar we either mute the global MasterVolume of the SoundEffect class or set it to full volume.  This will affect all playing sound effects.
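Since Update() runs every frame, holding the spacebar will actually flip MasterVolume back and forth rapidly.  A common fix, sketched below, is to remember the previous KeyboardState and only toggle on the transition from up to down ( the same technique is covered in more detail in the input chapter ):

```csharp
// Sketch: toggle mute only when Space is newly pressed.
// previousKeyboardState is assumed to be a KeyboardState member of the Game class,
// initialized in LoadContent() and updated at the end of Update().
KeyboardState keyboardState = Keyboard.GetState();
if (keyboardState.IsKeyDown(Keys.Space) &&
    !previousKeyboardState.IsKeyDown(Keys.Space))
{
    SoundEffect.MasterVolume = SoundEffect.MasterVolume == 0.0f ? 1.0f : 0.0f;
}
previousKeyboardState = keyboardState;
```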

 

Positional Audio Playback

Sound effects can also be positioned in 3D space easily in XNA. 

// Display positional audio

using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;
using Microsoft.Xna.Framework.Input;
using Microsoft.Xna.Framework.Audio;

namespace Example3
{
    public class Game1 : Game
    {
        GraphicsDeviceManager graphics;
        SpriteBatch spriteBatch;
        SoundEffect soundEffect;
        SoundEffectInstance instance;
        AudioListener listener;
        AudioEmitter emitter;


        public Game1()
        {
            graphics = new GraphicsDeviceManager(this);
            Content.RootDirectory = "Content";
        }

        protected override void Initialize()
        {
            base.Initialize();
        }

        protected override void LoadContent()
        {
            spriteBatch = new SpriteBatch(GraphicsDevice);
            
            soundEffect = this.Content.Load<SoundEffect>("circus");
            instance = soundEffect.CreateInstance();
            instance.IsLooped = true;

            listener = new AudioListener();
            emitter = new AudioEmitter();

            // WARNING!  Apply3D requires the sound effect to be mono;
            // a stereo sound will throw an exception.
            instance.Apply3D(listener, emitter);
            instance.Play();
        }

        protected override void Update(GameTime gameTime)
        {
            if (GamePad.GetState(PlayerIndex.One).Buttons.Back == 
                ButtonState.Pressed || Keyboard.GetState().IsKeyDown(
                Keys.Escape))
                Exit();

            if (Keyboard.GetState().IsKeyDown(Keys.Left))
            {
                listener.Position += new Vector3(-0.1f, 0f, 0f);
                instance.Apply3D(listener, emitter);
            }
            if (Keyboard.GetState().IsKeyDown(Keys.Right))
            {
                listener.Position += new Vector3(0.1f, 0f, 0f);
                instance.Apply3D(listener, emitter);
            }

            if (Keyboard.GetState().IsKeyDown(Keys.Up))
            {
                listener.Position += new Vector3(0f, 0.1f, 0f);
                instance.Apply3D(listener, emitter);
            }
            if (Keyboard.GetState().IsKeyDown(Keys.Down))
            {
                listener.Position += new Vector3(0f, -0.1f, 0f);
                instance.Apply3D(listener, emitter);
            }
            base.Update(gameTime);
        }

        protected override void Draw(GameTime gameTime)
        {
            GraphicsDevice.Clear(Color.CornflowerBlue);
            base.Draw(gameTime);
        }
    }
}

 

In this example, we load a single SoundEffect and start it looping infinitely.  We then create an AudioListener and AudioEmitter instance.  The AudioListener represents the location of your ear within the virtual world, while the AudioEmitter represents the position of the sound effect.  The default location of both is a Vector3 at (0,0,0).  You set the position of a SoundEffect by calling Apply3D().  In our Update() call, if the user hits an arrow key we update the Position of the AudioListener accordingly.  After changing the position of a sound you have to call Apply3D() again.  As you hit the arrow keys you will notice the audio pans and changes volume to correspond with the updated position.  It is very important that your source audio file is in Mono ( as opposed to Stereo ) format if you use Apply3D(), or an exception will be thrown.
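The emitter can be moved in exactly the same way.  As a hedged sketch, giving the emitter a velocity also lets the engine apply a Doppler shift; the DistanceScale and DopplerScale values shown are illustrative tuning knobs, not required settings:

```csharp
// Sketch: moving the sound source instead of the listener.
emitter.Position = new Vector3(5f, 0f, 0f);   // sound now comes from the right
emitter.Velocity = new Vector3(-1f, 0f, 0f);  // moving left; enables Doppler shift
instance.Apply3D(listener, emitter);          // re-apply after any change

// Optional global tuning of the 3D audio simulation
SoundEffect.DistanceScale = 2f;  // larger values mean slower falloff with distance
SoundEffect.DopplerScale = 1f;   // scales the strength of the Doppler effect
```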

 

Using XACT

As mentioned earlier, XACT used to be the only option when it came to audio programming in XNA.  XACT is still available and it enables your audio designer to have advanced control over the music and sound effects that appear in your game, while the programmer uses a simple programmatic interface.  One big caveat is that XACT is part of the XNA installer or the DirectX SDK and is not available on Mac OS or Linux.  If you wish to install it but do not have an old version of Visual Studio installed, instructions can be found here ( http://www.gamefromscratch.com/post/2015/07/23/Installing-XNA-Tools-Like-XACT-without-Visual-Studio-2010.aspx ).  If you are on MacOS or Linux, you will want to stick to the simplified audio API that we demonstrated earlier.

XACT is installed as part of the XNA Game Studio install; on 64-bit Windows the XACT executables are by default located in C:\Program Files (x86)\Microsoft XNA\XNA Game Studio\v4.0\Tools.  Start by running AudConsole3.exe:

image

 

The XACT Auditioning Tool needs to be running when you run the XACT tool.

Then launch Xact3.exe

image

First create a new project:

image

 

Next right click Wave Banks and select New Wave Bank

image

 

Drag and drop your source audio files into the Wave Bank window:

image

 

Now create a new Sound Bank by right clicking Sound Banks and selecting New Sound Bank

image

 

Now drag the Wave you wish to use from the Wave Bank to the Sound Bank

a1

 

Now create a Cue by dragging and dropping the Sound from the Sound Bank to the Cue window.  Multiple Sounds can be added to a Cue if desired.

a2

 

You can rename the Cue, set the probability each Sound will play if you add several Sounds to the Cue, and change the instance properties of the Cue in the properties window on the left:

image

Now Build the results:

image

 

This will then create two directories in the folder you created your project in:

image

 

These files need to be added directly to your project; you do not use the content pipeline tool!  Simply copy all three files to the Content folder and set their build action to Copy.

image

 

Now let’s look at the code required to use these generated files:

using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;
using Microsoft.Xna.Framework.Input;
using Microsoft.Xna.Framework.Audio;

namespace Example4
{
    public class Game1 : Game
    {
        GraphicsDeviceManager graphics;
        SpriteBatch spriteBatch;
        AudioEngine audioEngine;
        SoundBank soundBank;
        WaveBank waveBank;

        public Game1()
        {
            graphics = new GraphicsDeviceManager(this);
            Content.RootDirectory = "Content";
        }

        protected override void Initialize()
        {
            base.Initialize();
        }
        protected override void LoadContent()
        {
            // Create a new SpriteBatch, which can be used to draw textures.
            spriteBatch = new SpriteBatch(GraphicsDevice);

            audioEngine = new AudioEngine("Content/test.xgs");
            soundBank = new SoundBank(audioEngine, "Content/Sound Bank.xsb");
            waveBank = new WaveBank(audioEngine, "Content/Wave Bank.xwb");

            soundBank.GetCue("ak47").Play();
        }

        protected override void Update(GameTime gameTime)
        {
            if (GamePad.GetState(PlayerIndex.One).Buttons.Back == 
                ButtonState.Pressed || Keyboard.GetState().IsKeyDown(
                Keys.Escape))
                Exit();

            // TODO: Add your update logic here

            base.Update(gameTime);
        }

        protected override void Draw(GameTime gameTime)
        {
            GraphicsDevice.Clear(Color.CornflowerBlue);

            // TODO: Add your drawing code here

            base.Draw(gameTime);
        }
    }
}

 

First you create an AudioEngine using the xgs file, then a SoundBank using the xsb file and a WaveBank using the xwb file.  We then play the Cue we created earlier with a call to soundBank.GetCue("ak47").Play().  This process allows the audio details to be configured outside of the game while the programmer simply uses the created Cue.
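One detail worth knowing: the XACT AudioEngine expects to be ticked regularly so that cue state changes and transitions get processed.  A hedged sketch of the Update() override from the example above with this call added:

```csharp
protected override void Update(GameTime gameTime)
{
    // Let the XACT engine perform its periodic work each frame
    audioEngine.Update();

    if (GamePad.GetState(PlayerIndex.One).Buttons.Back == ButtonState.Pressed
        || Keyboard.GetState().IsKeyDown(Keys.Escape))
        Exit();

    base.Update(gameTime);
}
```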

 

Finally, it is possible to play audio files that weren’t added using the content pipeline or XACT by loading them from a Uri.

        protected override void LoadContent()
        {
            // Create a new SpriteBatch, which can be used to draw textures.
            spriteBatch = new SpriteBatch(GraphicsDevice);

            // URL MUST be relative in MonoGame
            System.Uri uri = new System.Uri("content/background.mp3",
                             System.UriKind.Relative);
            Song song = Song.FromUri("mySong", uri);
            MediaPlayer.Play(song);
            MediaPlayer.ActiveSongChanged += (s, e) => {
                song.Dispose();
                System.Diagnostics.Debug.WriteLine("Song ended and disposed");
            };
        }

 

First you create a Uri that locates the audio file you want to load.  We then load it using the method Song.FromUri(), passing in a name as well as the Uri.  One very important thing to be aware of here: in XNA you could use any URI, but in MonoGame it needs to be a relative path.

 

The Video

 

Programming

23. July 2015

 

I recently ran into a bit of a challenge and the workaround wasn’t entirely obvious, so I’ve decided to share the process here.  The XNA Game Studio install includes a couple of tools, the XACT audio tool being specifically what I was after.  Unfortunately, to install XNA you need to first have Visual Studio 2010 or Visual Studio 2010 Express installed.  As that version of VS is getting increasingly dated, this is going to be an issue for many.  Fortunately there is a workaround.

 

First download the XNA installer here.  The file is called XNAGS40_setup.exe

 

Now open a command prompt ( possibly with admin privileges ) and CD to the directory containing the file you downloaded.

Run the command:

XNAGS40_setup.exe /x

You will now be prompted where to extract:

image

Click OK

This will create a couple of files, the most important being redists.msi.  Run this file ( just type redists.msi and hit [Enter] at the command line, or double click it in Explorer ).

 

This will in turn create a directory structure in Program Files ( or Program Files (x86) on 64-bit Windows ) called Microsoft XNA.

Close the command prompt and navigate to that folder in Windows Explorer then open XNA Game Studio\v4.0\setup:

image

Run xnags_shared.msi then xnags_platform_tools.msi; both are simple installers, so take the default options if asked.

Now if you check the folder XNA Game Studio/v4.0 you should see that all of the tools you need have been installed in the Tools directory:

image

Programming, General

28. June 2015

 

In this chapter we are going to explore handling input from the keyboard, mouse and gamepad in your MonoGame game.  XNA/MonoGame also have support for mobile specific input such as motion and touch screens; we will cover these topics in a later chapter.

 

There is an HD video of this chapter available here.

 

XNA input capabilities were at once powerful, straightforward and a bit lacking.  If you come from another game engine or library you may be shocked to discover there is no event driven interface out of the box, for example.  All input in XNA is done via polling; if you want an event layer, you build it yourself or use one of the existing 3rd party implementations.  On the other hand, as you are about to see, the provided interfaces are incredibly consistent and easy to learn.

 

Handling Keyboard Input

 

Let’s start straight away with a code sample:

using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;
using Microsoft.Xna.Framework.Input;
using System.Text;

namespace Example1
{
    public class Game1 : Game
    {
        GraphicsDeviceManager graphics;
        SpriteBatch spriteBatch;
        Vector2 position;
        Texture2D texture;

        public Game1()
        {
            graphics = new GraphicsDeviceManager(this);
            Content.RootDirectory = "Content";
        }

        protected override void Initialize()
        {
            base.Initialize();
            position = new Vector2(graphics.GraphicsDevice.Viewport.Width / 2 - 64,
                                   graphics.GraphicsDevice.Viewport.Height / 2 - 64);
        }

        protected override void LoadContent()
        {
            spriteBatch = new SpriteBatch(GraphicsDevice);
            texture = this.Content.Load<Texture2D>("logo128");
        }

        protected override void UnloadContent()
        {
        }

        protected override void Update(GameTime gameTime)
        {
            // Poll for current keyboard state
            KeyboardState state = Keyboard.GetState();
            
            // If they hit esc, exit
            if (state.IsKeyDown(Keys.Escape))
                Exit();

            // Print to debug console currently pressed keys
            System.Text.StringBuilder sb = new StringBuilder();
            foreach (var key in state.GetPressedKeys())
                sb.Append("Key: ").Append(key).Append(" pressed ");

            if (sb.Length > 0)
                System.Diagnostics.Debug.WriteLine(sb.ToString());
            else
                System.Diagnostics.Debug.WriteLine("No Keys pressed");
            
            // Move our sprite based on arrow keys being pressed:
            if (state.IsKeyDown(Keys.Right))
                position.X += 10;
            if (state.IsKeyDown(Keys.Left))
                position.X -= 10;
            if (state.IsKeyDown(Keys.Up))
                position.Y -= 10;
            if (state.IsKeyDown(Keys.Down))
                position.Y += 10;

            base.Update(gameTime);
        }

        protected override void Draw(GameTime gameTime)
        {
            GraphicsDevice.Clear(Color.CornflowerBlue);

            spriteBatch.Begin();
            spriteBatch.Draw(texture, position);
            spriteBatch.End();
            base.Draw(gameTime);
        }
    }
}

 

We are going to re-use the same basic example for all the examples in this chapter.  It simply draws a sprite centered to the screen, then we manipulate its position in Update().

image

 

In this particular example, when the user hits keys, they are logged to the debug console:

image

 

Now let’s take a look at the keyboard specific code.  It all starts with a call to Keyboard.GetState(), which returns a struct containing the current state of the keyboard, including modifier keys like Control or Shift.  It also contains a method named GetPressedKeys(), which returns an array of all the keys that are currently pressed.  In this example we simply loop through the pressed keys, writing them out to debug.  Finally we poll the pressed state of the arrow keys and move our position accordingly.

 

Handling Key State Changes

One thing you might notice with XNA is that you are simply checking the current state of a key, that is, whether it is pressed or not.  What if you only want to respond when the key is first pressed?  This requires a bit of work on your behalf.

    public class Game1 : Game
    {
        GraphicsDeviceManager graphics;
        SpriteBatch spriteBatch;
        Vector2 position;
        Texture2D texture;
        KeyboardState previousState;

        public Game1()
        {
            graphics = new GraphicsDeviceManager(this);
            Content.RootDirectory = "Content";
        }

        protected override void Initialize()
        {
            base.Initialize();
            position = new Vector2(graphics.GraphicsDevice.Viewport.Width / 2 - 64,
                                   graphics.GraphicsDevice.Viewport.Height / 2 - 64);

            previousState = Keyboard.GetState();
        }

        protected override void LoadContent()
        {
            spriteBatch = new SpriteBatch(GraphicsDevice);
            texture = this.Content.Load<Texture2D>("logo128");
        }

        protected override void Update(GameTime gameTime)
        {
            KeyboardState state = Keyboard.GetState();
            
            // If they hit esc, exit
            if (state.IsKeyDown(Keys.Escape))
                Exit();

            // Move our sprite based on arrow keys being pressed:
            if (state.IsKeyDown(Keys.Right) && !previousState.IsKeyDown(Keys.Right))
                position.X += 10;
            if (state.IsKeyDown(Keys.Left) && !previousState.IsKeyDown(Keys.Left))
                position.X -= 10;
            if (state.IsKeyDown(Keys.Up))
                position.Y -= 10;
            if (state.IsKeyDown(Keys.Down))
                position.Y += 10;

            base.Update(gameTime);

            previousState = state;
        }

        protected override void Draw(GameTime gameTime)
        {
            GraphicsDevice.Clear(Color.CornflowerBlue);

            spriteBatch.Begin();
            spriteBatch.Draw(texture, position);
            spriteBatch.End();
            base.Draw(gameTime);
        }
    }

 

The important changes here are the new previousState member and the checks against it in Update().  Essentially if you want to check for changes in input state ( this applies to gamepad and mouse events too ), you need to track them yourself.  This is a matter of keeping a copy of the previous state; then in your input check you test not only if a key is pressed, but also whether it was pressed in the previous state.  If it wasn’t, this is a new key press and we respond accordingly.  In the above example, on Left or Right arrow presses we only respond to new key presses, so moving left or right requires repeatedly hitting and releasing the arrow key.

 

Handling Mouse Input

Next we explore the process of handling Mouse input.  You will notice the process is almost identical to keyboard handling.  Once again, let’s jump right in with a code example.

using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;
using Microsoft.Xna.Framework.Input;
using System.Text;

namespace Example2
{
    public class Game1 : Game
    {
        GraphicsDeviceManager graphics;
        SpriteBatch spriteBatch;
        Vector2 position;
        Texture2D texture;

        public Game1()
        {
            graphics = new GraphicsDeviceManager(this);
            Content.RootDirectory = "Content";
        }

        protected override void Initialize()
        {
            base.Initialize();
            position = new Vector2(graphics.GraphicsDevice.Viewport.Width / 2,
                                   graphics.GraphicsDevice.Viewport.Height / 2);

            
        }

        protected override void LoadContent()
        {
            spriteBatch = new SpriteBatch(GraphicsDevice);
            texture = this.Content.Load<Texture2D>("logo128");
        }

        protected override void Update(GameTime gameTime)
        {
            MouseState state = Mouse.GetState();

            // Update our sprite's position to the current cursor location
            position.X = state.X;
            position.Y = state.Y;

            // Check if Right Mouse Button pressed, if so, exit
            if (state.RightButton == ButtonState.Pressed)
                Exit();

            base.Update(gameTime);
        }

        protected override void Draw(GameTime gameTime)
        {
            GraphicsDevice.Clear(Color.CornflowerBlue);

            spriteBatch.Begin();
            spriteBatch.Draw(texture, position, origin:new Vector2(64,
                             64));
            spriteBatch.End();
            base.Draw(gameTime);
        }
    }
}

 

When you run this example, the texture will move around relative to the location of the mouse.  When the user right clicks, the application exits.  The logic works almost identically to handling Keyboard input.  Each frame you check the MouseState by calling Mouse.GetState().  This MouseState struct contains the current mouse X and Y as well as the status of the left, right and middle mouse buttons and the scroll wheel position.  You may notice there are also values for XButton1 and XButton2; these buttons can change from device to device, but generally represent a forward and back navigation button.  On devices with no mouse support, X and Y will always be 0 while each button state will always be set to ButtonState.Released.  If you are dealing with a multi touch device this code will continue to work, although the values will only reflect the primary (first) touch point.  We will discuss mobile input in more detail in a later chapter.  As with handling Keyboard events, if you want to track changes in event state, you will have to track them yourself.
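For example, the scroll wheel position ( MouseState.ScrollWheelValue ) is a cumulative total since the game started, so to react to scrolling you have to compute the change between frames yourself.  A minimal hedged sketch, where previousScrollValue is assumed to be an int member of the Game class:

```csharp
// Sketch: turning the cumulative scroll wheel value into a per-frame delta.
MouseState mouseState = Mouse.GetState();
int scrollDelta = mouseState.ScrollWheelValue - previousScrollValue;
previousScrollValue = mouseState.ScrollWheelValue;

if (scrollDelta > 0)
    System.Diagnostics.Debug.WriteLine("Scrolled up by " + scrollDelta);
else if (scrollDelta < 0)
    System.Diagnostics.Debug.WriteLine("Scrolled down by " + (-scrollDelta));
```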

 

If you add the following code to your update, you will notice some interesting things about the X,Y position of the mouse:

        protected override void Update(GameTime gameTime)
        {
            MouseState state = Mouse.GetState();

            // Update our sprite's position to the current cursor location
            position.X = state.X;
            position.Y = state.Y;

            System.Diagnostics.Debug.WriteLine(position.X.ToString() + 
                                   "," + position.Y.ToString());
            // Check if Right Mouse Button pressed, if so, exit
            if (state.RightButton == ButtonState.Pressed)
                Exit();

            base.Update(gameTime);
        }

The X and Y values are relative to the window’s origin.  That is, (0,0) is the top left corner of the drawable portion of the window, while (width,height) is the bottom right corner.  However, in a windowed app the mouse pointer location continues to be updated even as the cursor moves outside the window, still relative to the top left corner of the application window.

image

 

You can also set the position of the cursor in code using the following line:

            if (state.MiddleButton == ButtonState.Pressed)
                Mouse.SetPosition(graphics.GraphicsDevice.Viewport.Width / 2,
                                  graphics.GraphicsDevice.Viewport.Height / 2);

 

This code will center the mouse position in the middle of the window when the user presses the middle button.

 

Finally, it’s common to want to display the mouse cursor, which is easily accomplished using:

            IsMouseVisible = true;

This member of the Game class toggles the visibility of the system mouse cursor.

image

 

 

Handling Gamepad Input

 

Now we will look at handling input from a gamepad or joystick controller.  You probably won’t be surprised to discover the process is remarkably consistent.

using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;
using Microsoft.Xna.Framework.Input;
using System.Text;

namespace Example3
{
    public class Game1 : Game
    {
        GraphicsDeviceManager graphics;
        SpriteBatch spriteBatch;
        Vector2 position;
        Texture2D texture;

        public Game1()
        {
            graphics = new GraphicsDeviceManager(this);
            Content.RootDirectory = "Content";
        }

        protected override void Initialize()
        {
            base.Initialize();
            position = new Vector2(graphics.GraphicsDevice.Viewport.Width / 2,
                                   graphics.GraphicsDevice.Viewport.Height / 2);
        }

        protected override void LoadContent()
        {
            spriteBatch = new SpriteBatch(GraphicsDevice);
            texture = this.Content.Load<Texture2D>("logo128");
        }

        protected override void Update(GameTime gameTime)
        {
            if (Keyboard.GetState().IsKeyDown(Keys.Escape)) Exit();

            // Check the device for Player One
            GamePadCapabilities capabilities = GamePad.GetCapabilities(PlayerIndex.One);
            
            // If there is a controller attached, handle it
            if (capabilities.IsConnected)
            {
                // Get the current state of Controller1
                GamePadState state = GamePad.GetState(PlayerIndex.One);

                // You can check explicitly if a gamepad has support for a certain feature
                if (capabilities.HasLeftXThumbStick)
                {
                    // Check the direction in the X axis of the left analog stick
                    if (state.ThumbSticks.Left.X < -0.5f) 
                        position.X -= 10.0f;
                    if (state.ThumbSticks.Left.X > 0.5f) 
                        position.X += 10.0f;
                }

                // You can also check the controllers "type"
                if (capabilities.GamePadType == GamePadType.GamePad)
                {
                    if (state.IsButtonDown(Buttons.A))
                        Exit();
                }
            }
            base.Update(gameTime);
        }

        protected override void Draw(GameTime gameTime)
        {
            GraphicsDevice.Clear(Color.CornflowerBlue);

            spriteBatch.Begin();
            spriteBatch.Draw(texture, position, origin: new Vector2(64, 64));
            spriteBatch.End();
            base.Draw(gameTime);
        }
    }
}

 

When you run this example, if there is a controller attached, pressing left or right on the analog stick will move the sprite accordingly.  Hitting the A button (or pressing Escape for those without a controller) will cause the game to exit.

 

The logic here is remarkably consistent with mouse and keyboard handling.  The primary difference is that the number of controllers attached, and the capabilities of each controller, can vary massively, so the code needs to respond appropriately.  In the above example, we check only for the first controller attached, by passing PlayerIndex.One to our GetCapabilities() and GetState() calls.  You can have up to 4 controllers attached, and each needs to be polled separately.
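If you want to support all four players, the same logic extends naturally to a loop.  A sketch:

```csharp
// Sketch: polling every possible controller slot each frame.
for (PlayerIndex index = PlayerIndex.One; index <= PlayerIndex.Four; index++)
{
    GamePadCapabilities capabilities = GamePad.GetCapabilities(index);
    if (!capabilities.IsConnected)
        continue;   // No controller in this slot

    GamePadState state = GamePad.GetState(index);
    // ... handle this player's input here ...
}
```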

 

Supported Gamepads


On the PC there are a plethora of devices available with a wide range of capabilities. You can have up to 4 different controllers attached, each accessible by passing the appropriate PlayerIndex to GetState(). The following device types can be returned:

  • AlternateGuitar
  • ArcadeStick
  • BigButtonPad
  • DancePad
  • DrumKit
  • FlightStick
  • GamePad
  • Guitar
  • Unknown
  • Wheel


Obviously each device supports a different set of features, which can be polled individually using the GamePadCapabilities struct returned by GamePad.GetCapabilities().


Buttons on a GamePad controller are treated just like keys and mouse buttons, with a value of Pressed or Released.  Once again, if you want to track changes in state you need to code this functionality yourself.  When dealing with analog sticks, the value returned is a Vector2 representing the current position of the stick.  A value of 1.0 represents a stick that is fully up or right, while a value of -1.0 represents a stick that is fully left or down.  A stick at (0,0) is un-pressed.
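As with the mouse, tracking button state changes can be done by caching the previous state.  A minimal sketch (the previousGamePadState field is a hypothetical addition of my own):

```csharp
GamePadState state = GamePad.GetState(PlayerIndex.One);

// A button "press" is a Released -> Pressed transition between frames
if (state.IsButtonDown(Buttons.A) && previousGamePadState.IsButtonUp(Buttons.A))
{
    // A was newly pressed this frame
}

// Cache this frame's state for next frame's comparison
previousGamePadState = state;
```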

 

There is a small challenge in dealing with analog controls, however, that you should be aware of.  Even when a stick is not pressed, it is almost never at the position (0.0f,0.0f); the sensors often return very small fluctuations from complete zero.  This means if you respond directly to input without taking these small variations into account, your sprites will “twitch” while they are supposed to be stationary.  This is worked around using something called a dead zone: a range of motions, or motion values, that are considered too small to be registered.  You can think of a dead zone as a value that is “close enough to zero to be considered zero”.

 

You have a couple of options in XNA/MonoGame for dealing with dead zones.  The default is IndependentAxes, which compares each axis against the dead zone separately; Circular combines the X and Y values together before comparison to the dead zone (recommended for controls that use both axes together, such as a thumbstick controlling a 3D view); and finally None ignores the dead zone completely.  You would generally choose None if you don’t care about a dead zone, or wish to implement one yourself.

           GamePadState state = GamePad.GetState(PlayerIndex.One,
                                                 GamePadDeadZone.Circular);
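If you pass GamePadDeadZone.None and want to handle the dead zone yourself, a manual circular dead zone might look like the following sketch (the 0.25f threshold is an arbitrary value of my own choosing, not a framework constant):

```csharp
GamePadState state = GamePad.GetState(PlayerIndex.One, GamePadDeadZone.None);
Vector2 stick = state.ThumbSticks.Left;

// Treat any stick position within this radius of center as zero
const float deadZone = 0.25f;
if (stick.Length() < deadZone)
    stick = Vector2.Zero;   // "Close enough to zero to be considered zero"
```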

 

As you can see, XNA's input handling is somewhat sparse compared to other game engines, but it does provide the building blocks to make more complex systems if required. The approach to handling input across devices is remarkably consistent, making it easier to use and hopefully resulting in fewer unexpected behaviors and bugs.

 

 

The Video


24. June 2015

 

The MonoGame tutorial series has been written from day one with the intention of being compiled into book format.  As a thank you to GameFromScratch.com Patreon backers, WIP copies (as well as finished books) are available for download.

 

 

This represents the first compilation of Cross Platform Game Development with MonoGame and contains all of the tutorial series to date:

  • An Introduction and Brief History of XNA and MonoGame
  • Getting Started with MonoGame on Windows
  • Getting Started with MonoGame on MacOS
  • Creating an Application
  • Textures and SpriteBatch

 

These represent early drafts, so the formatting isn’t final, there needs to be a thorough proofreading, some images need to be regenerated, and of course “book” items like a proper foreword, table of contents and index all need to be generated.  These tasks will all have to wait for the book to be finished.

 

The book is currently available in the following formats:

  • PDF
  • epub
  • mobi

 

The books are available for download here. (Authentication required)

 

If there is an additional format you would like to see it compiled for, please let me know.  Currently the book weighs in at 77 pages.  As new tutorials are added, new compilations will be released.


19. June 2015

 

Now we move on to a topic that people always seem to love, graphics!  In the past few chapters/videos I’ve said over and over “don’t worry, we will cover this later”, well… welcome to later. We are primarily going to focus on loading and displaying textures using a SpriteBatch.  As you will quickly discover, this is a more complex subject than it sounds.

 

As always, there is an HD video of the content available here.

Before we can proceed too far we need a texture to draw.  A texture can generally be thought of as a 2D image stored in memory.  The source image of a texture can be in bmp, dds, dib, hdr, jpg, pfm, png, ppm or tga formats.  In the “real world” that generally means bmp, jpg or png formats, and there is something to be aware of right away.  Of those three formats, only png has an alpha channel, meaning it supports transparency out of the box.  There are however ways to represent transparency in the other formats, as we will see shortly.  If you’ve got no idea which format to pick, or why, pick png.

 

 

Using the Content Pipeline

If you’ve been reading since the beginning you’ve already seen a bit of the content pipeline, but now we are going to actually see it in action with a real world example.  

Do we have to use the content pipeline for images?


I should make it clear, you can load images that haven’t been converted into xnb format. As of XNA 4, a simpler image loading API was added that allows you to load gif, jpg and png files directly, with the ability to crop, scale and save. The content pipeline does a lot for you though, including massaging your texture into a platform friendly format, potentially compressing your image, generation of mip maps or power of two textures, pre-multiplied alpha (explained shortly), optimized loading and more. MonoGame included a number of methods for directly loading content to make up for its lack of a working cross platform pipeline. With the release of the content pipeline tool, these methods are deprecated. Simply put, for game assets (i.e., not screenshots, dynamic images, etc.), you should use the content pipeline.

Create a new project, then in the Contents folder, double click the file Content.mgcb.

image

 

This will open the MonoGame Content Pipeline tool.  Let’s add our texture file; simply select Edit->Add->Existing Item...

image

Navigate to and select a compatible image file.  When prompted, choose the mode that makes the most sense.  I want the original to be untouched, so I am choosing Copy the file to the directory.

image

 

Your content project should now look like:

image

The default import settings for our image are fine, but we need to set the Content build platform.  Select Content in the dialog pictured above, then under Platform select the platform you need to build for.

image

Note the two options for Windows, Windows and WindowsGL.  The Windows platform uses a DirectX backend for rendering, while WindowsGL uses OpenGL.  This does have an effect on how content is processed so the difference is important. 

Now select Build->Build, saving when prompted:

image

 

You should get a message that your content was built.

image

We are now finished importing, return to your IDE.

Important Platform Specific Information


On Windows the .mgcb file is all that you need. When the IDE encounters it, it will basically treat it as a symlink and instead refer to the contents it contains. Currently when building on MacOS using Xamarin, you have to manually copy the generated XNB contents into your project and set their build type to Content. The generated files are located in the Output Folder as configured in the Content Pipeline tool. I have been notified that a fix for this is currently underway, so hopefully the Mac and Windows development experience will be identical soon.
 
Alright, we now have an image to work with, let’s jump into some code.
 
 
 

Loading and displaying a Texture2D

So now we are going to load the texture we just added to the content project, and display it on screen.  Let’s just jump straight into the code.

using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;
using Microsoft.Xna.Framework.Input;

namespace Example1
{
    public class Game1 : Game
    {
        GraphicsDeviceManager graphics;
        SpriteBatch spriteBatch;
        Texture2D texture;

        public Game1()
        {
            graphics = new GraphicsDeviceManager(this);
            Content.RootDirectory = "Content";
        }

        protected override void Initialize()
        {
            base.Initialize();
        }

        protected override void LoadContent()
        {
            spriteBatch = new SpriteBatch(GraphicsDevice);
            texture = this.Content.Load<Texture2D>("logo");
        }

        protected override void UnloadContent()
        {
            // texture.Dispose();  <-- only for directly loaded textures
            Content.Unload();
        }

        protected override void Update(GameTime gameTime)
        {
            if (GamePad.GetState(PlayerIndex.One).Buttons.Back == ButtonState.Pressed
                || Keyboard.GetState().IsKeyDown(Keys.Escape))
                Exit();
            base.Update(gameTime);
        }

        protected override void Draw(GameTime gameTime)
        {
            GraphicsDevice.Clear(Color.CornflowerBlue);

            spriteBatch.Begin();
            spriteBatch.Draw(texture,Vector2.Zero);
            spriteBatch.End();

            base.Draw(gameTime);
        }
    }
}
 
When we run this code we see:
 
image
 
 
Obviously your image will vary from mine, but our texture is drawn on screen at the position (0,0).
 
There are a few key things to notice here.  First we added a Texture2D to our class, which is essentially the in memory container for our texture image.  In LoadContent() we then load our image into our texture using the call:
 
texture = this.Content.Load<Texture2D>("logo");
 
You'll notice we use our Game's Content member here.  This is an instance of Microsoft.Xna.Framework.Content.ContentManager and it is ultimately responsible for loading binary assets from the content pipeline.  The primary method is the generic Load() method, which takes a single parameter: the name of the asset to load minus the extension.  Notice the bold there?  That’s because this is a very common tripping point.  In addition to Texture2D, Load() supports the following types:
  • Effect
  • Model
  • SpriteFont
  • Texture
  • Texture2D
  • TextureCube

It is possible to extend the processor to support additional types, but it is beyond the scope of what we are covering here today.

Next we get to the UnloadContent method, where we simply call Content.Unload().  The ContentManager “owns” all of the content it loads, so this cleans up the memory for all of the objects loaded through the ContentManager.  Notice I left a commented out example calling Dispose().  It’s important to know that if you load a texture outside of the ContentManager, or create one dynamically, it is your responsibility to dispose of it or you may leak memory.  You may say, hey, this will all get cleaned up on program exit anyway.  That isn’t technically wrong, although cleaning up after yourself is certainly a good habit to get into. 

 

Memory Leaks in C#?


Many developers new to C# think that because it's managed you can't leak memory. This simply isn't true. While memory management is much simpler in C# than in languages like C++, it is still quite possible to have memory leaks. In C# the easiest way is to not Dispose() of classes that implement IDisposable. An object that implements IDisposable owns an unmanaged resource (such as a texture), and that memory will be leaked if nobody calls the Dispose() method. Wrapping the allocation in a using statement will result in Dispose() being called at the end of scope. As a point of trivia, other common C# memory leaks are caused by not removing event listeners and, of course, calling leaky native code (P/Invoke).
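By way of illustration, here is the using pattern in action, with FileStream standing in for any IDisposable (a texture loaded outside the ContentManager would work the same way):

```csharp
// Dispose() is called automatically when the using block's scope ends,
// even if an exception is thrown inside it.
using (var stream = new System.IO.FileStream("save.dat",
                                             System.IO.FileMode.OpenOrCreate))
{
    // ... work with the stream's unmanaged file handle ...
}   // stream.Dispose() runs here
```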
 
Now that we have our texture loaded, it's time to display it on screen.  This is done with the following code:
    spriteBatch.Begin();
    spriteBatch.Draw(texture,Vector2.Zero);
    spriteBatch.End();

I will explain the SpriteBatch in a few moments, so let’s instead focus on the Draw() call.  This needs to be called within a Begin()/End() pair.  SpriteBatch.Draw() has a lot of overloads, which we will look at now.  In this example we simply draw the passed in texture at the passed in position (0,0).  Next let’s look at a few of the options we have when calling Draw().

Where is 0,0?


Different libraries, frameworks and engines have different coordinate systems. In XNA, like most windowing or UI libraries, the position (0,0) refers to the top left corner of the screen. For sprites, (0,0) refers to the top left corner as well, although this can be changed in code. In many OpenGL based game engines, (0,0) is located at the bottom left corner of the screen. This distinction becomes especially important when you start working with 3rd party libraries like Box2D, which may have a different coordinate system. Using a top left origin system has advantages when dealing with UI, as your existing OS mouse and pixel coordinates are the same as your game's. However the OpenGL approach is more consistent with mathematics, where positive X and Y coordinate values refer to the top right quadrant on a Cartesian plane. Both are valid options, work equally well, just require some brain power to convert between.

 

Translation and Scaling

spriteBatch.Draw(texture, destinationRectangle: new Rectangle(50, 50, 300, 300));
 
This will draw our sprite at the position (50,50) and scaled to a width of 300 and a height of 300.

image

 

Rotated

spriteBatch.Draw(texture, 
    destinationRectangle: new Rectangle(50, 50, 300, 300),
    rotation:-45f
    );

This will rotate the image about its origin.  (Be aware that the rotation parameter is specified in radians, not degrees; to rotate by an exact number of degrees, pass the value through MathHelper.ToRadians().)

image

 

Notice that the rotation was performed relative to the top left corner of the texture.  Quite commonly when rotating and scaling you would rather do it about the sprite's midpoint.  This is where the origin value comes in.

 

Rotated about the Origin

spriteBatch.Draw(texture,
    destinationRectangle: new Rectangle(150 + 50,150 + 50, 300, 300),
    origin:new Vector2(texture.Width/2,texture.Height/2),
    rotation:-45f
    );

Ok, this one may require a bit of explanation.  The origin is now the midpoint of our texture; however, we are also going to be translating and scaling relative to our midpoint, not the top left.  This means the coordinates passed into our Rectangle need to take this into account if we wish to remain centered.  You also need to keep in mind that you are resizing the texture as part of the draw call.  This code results in:

image

 

For a bit of clarity, if we hadn’t translated (moved) the above, and instead used this code:

spriteBatch.Draw(texture,
    destinationRectangle: new Rectangle(0, 0, 300, 300),
    origin:new Vector2(texture.Width/2,texture.Height/2),
    rotation:-45f
    );
 
We would rotate about our sprite's center, but at the origin of our screen:

image

 

So it’s important to consider how the various parameters passed to draw interact with each other!

 

Tinted

spriteBatch.Begin();
spriteBatch.Draw(texture, 
    destinationRectangle: new Rectangle(50, 50, 300, 300),
    color:Color.Red);
spriteBatch.End();
 
image
 
The Color passed in (in this case Red) is multiplied with every pixel in the texture. Notice how it only affects the texture; the Cornflower Blue background is unaffected.  Multiplying red with the blue pixels resulted in a black-ish colour, while white pixels simply became red.
 
 
 

Flipped

spriteBatch.Draw(texture, 
    destinationRectangle: new Rectangle(50, 50, 300, 300),
    effects:SpriteEffects.FlipHorizontally|SpriteEffects.FlipVertically
    );

That's about it for Draw(); now let’s look a bit closer at SpriteBatch.

 

SpriteBatch

 

In order to understand exactly what SpriteBatch does, it’s important to understand how XNA does 2D.  At the end of the day, with modern GPUs, 2D game renderers no longer really exist.  Instead the renderer is actually still working in 3D and faking 2D.  This is done by using an orthographic camera ( explained later, don’t worry ) and drawing to a texture that is plastered on a 2D quad that is parallel to the camera.  SpriteBatch however takes care of this process for you, making it feel like you are still working in 2 dimensions. 

That isn’t all, however; SpriteBatch is also a key optimization trick.  Consider if your scene consisted of hundreds of small block shaped sprites, each consisting of a small 32x32 texture, plus all of the active characters in your scene, each with their own texture being drawn to the screen.  This would result in hundreds or thousands of Direct3D or OpenGL draw calls, which would really hurt performance.  This is where the “Batch” part of sprite batch comes in.  In its default operating mode (deferred), it simply queues up all of the drawing calls; they aren’t executed until End() is called.  It then tries to “batch” them all together into a single draw call, thus rendering as fast as possible.

There are settings attached to a SpriteBatch call, specified in the Begin() call, that we will see shortly.  These are the same for every single Draw call within the batch.  Additionally you should try to keep every single Draw call within the batch using the same texture, or as few different textures as possible.  Each different texture within a batch incurs a performance penalty.  You can also call multiple Begin()/End() pairs in a single render pass; just be aware that the Begin() process is rather expensive and this can quickly hurt performance if you do it too many times.  Don’t worry though, there are ways to easily organize multiple sprites within a single texture.  If by chance you actually want to perform each Draw call as it occurs, you can instead run the sprite batch in immediate mode, although since XNA 4 (which MonoGame is based on) there is little reason to use immediate mode, and the performance penalty is harsh.

One other major function of the SpriteBatch is handling blending, which is how overlapping sprites interact.

 

Sprite Blending

Up until now we’ve used a single sprite with no transparency, so that’s been relatively simple.  Let’s instead look at an example that isn’t entirely opaque.

Let’s go ahead and add a transparent sprite to our content project.  I am going to use this one:

transparentSprite

… I’m sorry, I simply couldn’t resist the pun.  The key part is that your sprite supports transparency, so if you draw it over itself you should see:

transparentSpriteOverlay

 

Now let’s change our code to draw two sprites in XNA.

using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;
using Microsoft.Xna.Framework.Input;

namespace Example2
{
    public class Game1 : Game
    {
        GraphicsDeviceManager graphics;
        SpriteBatch spriteBatch;
        Texture2D texture;

        public Game1()
        {
            graphics = new GraphicsDeviceManager(this);
            graphics.PreferredBackBufferWidth = 400;
            graphics.PreferredBackBufferHeight = 400;
            Content.RootDirectory = "Content";
        }

        protected override void LoadContent()
        {
            spriteBatch = new SpriteBatch(GraphicsDevice);
            texture = this.Content.Load<Texture2D>("transparentSprite");
        }

        protected override void Update(GameTime gameTime)
        {
            if (GamePad.GetState(PlayerIndex.One).Buttons.Back == ButtonState.Pressed
                || Keyboard.GetState().IsKeyDown(Keys.Escape))
                Exit();
            base.Update(gameTime);
        }

        protected override void Draw(GameTime gameTime)
        {
            GraphicsDevice.Clear(Color.CornflowerBlue);

            spriteBatch.Begin();
            spriteBatch.Draw(texture, Vector2.Zero);
            spriteBatch.Draw(texture, new Vector2(100,0));
            spriteBatch.End();

            base.Draw(gameTime);
        }
    }
}
 
... and run:
image
Pretty cool.

 

This example worked right out of the box for a couple of reasons.  First, our sprites were identical and transparent, so draw order didn’t matter.  Also, when we ran the content pipeline, the default importer (and the default sprite batch blend mode) is transparency friendly.

image

This setting creates a special transparency channel for your image upon import, which is used by the SpriteBatch when calculating transparency between images.
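That special channel is the pre-multiplied alpha mentioned earlier.  For the curious, the difference between straight and premultiplied blending for a single color channel (values in the 0..1 range) can be sketched as follows; this is standard alpha compositing math, not an XNA-specific API:

```csharp
// Standard alpha-compositing equations for one color channel (values 0..1).

// Straight (interpolative) alpha: the source color is weighted
// by its alpha at blend time.
static float BlendStraight(float src, float srcAlpha, float dst)
{
    return src * srcAlpha + dst * (1f - srcAlpha);
}

// Premultiplied alpha: the source color was already multiplied by alpha
// (by the content pipeline), so it is simply added to the attenuated
// destination color.
static float BlendPremultiplied(float srcPremultiplied, float srcAlpha, float dst)
{
    return srcPremultiplied + dst * (1f - srcAlpha);
}
```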

 

Let’s look at a less trivial example, with a transparent and opaque image instead.

using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;
using Microsoft.Xna.Framework.Input;

namespace Example2
{
    public class Game1 : Game
    {
        GraphicsDeviceManager graphics;
        SpriteBatch spriteBatch;
        Texture2D texture;
        Texture2D texture2;

        public Game1()
        {
            graphics = new GraphicsDeviceManager(this);
            graphics.PreferredBackBufferWidth = 400;
            graphics.PreferredBackBufferHeight = 400;
            Content.RootDirectory = "Content";
        }

        protected override void LoadContent()
        {
            spriteBatch = new SpriteBatch(GraphicsDevice);
            texture = this.Content.Load<Texture2D>("logo");
            texture2 = this.Content.Load<Texture2D>("transparentSprite");
        }

        protected override void Update(GameTime gameTime)
        {
            if (GamePad.GetState(PlayerIndex.One).Buttons.Back == ButtonState.Pressed
                || Keyboard.GetState().IsKeyDown(Keys.Escape))
                Exit();
            base.Update(gameTime);
        }

        protected override void Draw(GameTime gameTime)
        {
            GraphicsDevice.Clear(Color.CornflowerBlue);

            spriteBatch.Begin();
            spriteBatch.Draw(texture, Vector2.Zero);
            spriteBatch.Draw(texture2, Vector2.Zero);
            spriteBatch.End();

            base.Draw(gameTime);
        }
    }
}
 
When run:

image

So far, so good.  Now let’s mix up the draw order a bit…

spriteBatch.Begin();
spriteBatch.Draw(texture2, Vector2.Zero);
spriteBatch.Draw(texture, Vector2.Zero);
spriteBatch.End();

… and run:

image

Oh…

As you can see, the order in which we make Draw() calls is, by default, the order in which the sprites are drawn.  That is, the second Draw() call will draw over the results of the first Draw() call, and so on.

 

There is a way to explicitly set the drawing order:

spriteBatch.Begin(sortMode: SpriteSortMode.FrontToBack);
spriteBatch.Draw(texture2, Vector2.Zero, layerDepth:1.0f);
spriteBatch.Draw(texture, Vector2.Zero, layerDepth:0.0f);
spriteBatch.End();

 

Here you are setting the SpriteBatch sort order to be front to back, then manually setting the draw layer in each draw call.  As you might guess, there is also a BackToFront setting.  SpriteSortMode is also what determines whether drawing is immediate (SpriteSortMode.Immediate) or deferred (SpriteSortMode.Deferred). 

 

Blend States

 

We mentioned earlier that textures imported using the Content Pipeline by default have a special pre-calculated transparency channel created.  This corresponds with SpriteBatch's default BlendState, AlphaBlend.  This uses the magic value created by the pipeline to determine how overlapping transparent sprites are rendered.  If you don’t have a really good reason otherwise, and are using the Content Pipeline to import your textures, you should stick to the default.  I should point out that this behavior only became the default in XNA 4, so older tutorials may exhibit much different behavior.

 

The old default used to be interpolative blending, which used the RGBA values of the texture to determine transparency.  This could lead to some strange rendering artifacts (discussed here: https://en.wikipedia.org/wiki/Alpha_compositing).  The advantage is that all you need to blend images is an alpha channel; there is no requirement to create a special pre-multiplied channel.  This means you don’t have to run these images through the content pipeline.  If you wish to do things the “old” way, when importing your assets (if not simply loading directly from file) select false for PremultiplyAlpha in the Texture Importer processor settings of the Content Pipeline.  Then in your SpriteBatch, do the following:

spriteBatch.Begin(blendState:BlendState.NonPremultiplied);
 
There are additional BlendState options, including Additive (colors are simply added together) and Opaque (subsequent draw calls simply overwrite earlier calls).  You can have a great deal of control over the BlendState, but most projects simply will not require it.  One other thing I ignored is chroma keying.  This is another option for supporting transparency: basically you dedicate a single color to be transparent, then specify that color in the Content Pipeline.  Essentially you are forming a 1-bit alpha channel and “green screening” like in the movies.  Obviously you cannot use that color in your image, however.  In exchange for ugly source sprites and extra labor, you save on file size as you don’t need to encode an alpha channel.
 
 
There is some additional functionality built into SpriteBatch, including texture sampling, stencil buffers, matrix transforms and even special effects.  These are well beyond the basics though, so we will have to cover them at a later stage.
 
 

The Video

 

