
23. June 2015

 

With the release of version 4.8 of Unreal Engine, playing audio became a great deal easier for 2D games thanks to the addition of PlaySound2D.  In this section we are going to learn how to import and play audio files in Unreal Engine.  To drive the examples I created a simple UI that fires off audio playback.  If you're unfamiliar with creating a UI with UMG (Unreal Motion Graphics), be sure to read the previous tutorial.

 

As always there is an HD video version of this tutorial available right here.

We are going to be creating a simple UI to fire off audio events:

image

 

We will simply wire each button to fire off our examples.  I also needed several audio samples.  I personally downloaded each one from freesound.org.

 

Importing Audio Files

 

First we need some audio to work with.  So then… what audio formats work with Unreal Engine?  Mp3, mp4, ogg?  Nope… WAV.  You can import your sound files in whatever format you want, so long as it's WAV.  Don't worry, this isn't as big a hindrance as it sounds, as Unreal takes care of the compression and conversion steps for you, so the fact your soundtrack is 10MB of uncompressed audio isn't as damning as it seems.  Working from an uncompressed source format enables Unreal to offer a lot of power, as you will see shortly.  It also neatly steps around a number of licensing concerns, such as the patent minefield that is mp3.  If your source files aren't in WAV format, you can easily convert them using the freely available and completely awesome Audacity sound editor.

 

Your WAV files can be in PCM, ADPCM or DVI ADPCM format, although if using defaults you most likely don't need to worry about this detail.  They should be 16 bit, little endian (again… generally don't worry) uncompressed data at any sample rate, although 22kHz and 44.1kHz are recommended, the latter being the sample rate CD quality audio is encoded at.  Your audio files can be either mono (single channel) or stereo (dual channel), plus you can import up to 8 channels of audio (generally 8 mono WAV files) to encode 7.1 surround sound.  That is well beyond the scope of what we will be covering, but more details about 7.1 encoding can be found here.  Importing audio is as simple as using the Import button in the Content Browser, or simply dragging and dropping.

 

Once imported, you can double click your audio asset to bring up the editor.

image

 

Here you can set a number of properties including the compression amount, whether to loop, the pitch, and you can even add subtitle information.  There isn't anything we need to modify right now though.  I have imported a couple of different mono WAV files, like so:

image

 

And created a simple button to play the audio when pressed:

image

 

Playing Sounds

 

Now let’s wire up the OnClick event to play Thunder.wav, with the following blueprint:

image

 

Yeah… that’s all you need to do, drop in a Play Sound 2D function, pick the Wave file to play and done.  Before 4.8 the only option was Play Sound at Location, which is virtually identical but required a Position component as well.  You can achieve the same effect this way:

image

 

Both Play Sound at Location and Play Sound 2D are fire and forget, in that you have no control over them after the sound has begun to play (other than at a global level, like muting all audio ).  Neither moves with the actor either.

 

What if you want the audio to come from or move with a node in the scene?  This is possible too.   First let’s create a Paper2D character to attach the audio component to.  This process was covered in this tutorial in case you need a refresher.  Don’t forget to create a GameMode as well and configure your newly created controller to be active.

 

Using the Audio Component

 

I created this hierarchy of a character:

image

Notice the Audio component I’ve added?  There are several properties that can be set in the Details panel for the audio component, but the most important is the sound.

image

I went ahead and attached my “music” Sound Wave.  You can set the music file to automatically play using the Activation property:

image

There is also an event available that will fire when your audio file has finished playing. 

image

Unlike PlaySound2D, this sound isn’t fire and forget.   It can also be changed dynamically using the following Blueprint:

image

This blueprint finds the Audio component of our Pawn and then sets its Sound using a call to Play Sound Attached.  As you can see, there are several available properties to set, and you can easily position the audio in the world.

 

As I mentioned earlier, you can also manipulate a playing Sound Wave when it's attached as an audio component, like so:

image

 

Paradoxically, there doesn’t actually seem to be a method to get the current volume.  The obvious solution is to keep the volume as a variable and pass it to Adjust Volume Level.

 

Sound Cues

So far we’ve only used directly imported Sound Wave files, but every location we used a Wave, we could have also used a Cue.  As you will see, Cues give you an enormous amount of control over your audio.

 

Start by creating a new Sound Cue object:

image

Name it, then double click to bring up the Sound Cue editor:

image

This is well beyond the scope of this tutorial, but you can essentially make complex sounds out of Sound nodes, like this simple graph mixing two sounds together:

image

 

Again, any of the earlier functions such as Play Sound 2D will take a Cue in place of a Wave.

 

We have only scratched the very surface of audio functionality built into Unreal Engine, but this should be more than enough to get you started in 2D.

 

The Video


22. June 2015

 

Export to web has been a popular bullet point on most game engine feature lists as of late.  Both Unreal Engine and Unity recently offered native HTML5 export (Unity has had a plugin option for years, but plugins have gone the way of the Dodo).  LibGDX has offered an HTML5 target for ages, and Haxe based game engines can all cross compile to HTML or Flash.  GameMaker, Construct, Stencyl and more have HTML5 exporters.  Of course, there are a number of HTML5 native engines (I suppose Construct should be on that list actually…), but they aren't really what this conversation is about.

 

At the end of the day though… is anyone actually doing it?  Is there a “shipped” title from any of these engines that people are playing in any quantity?  It is my understanding that the vast majority of commercial web and Facebook games are still Flash based, with HTML5 representing perhaps 10-20% of titles.  More importantly, most of these titles are relatively simple, nothing harnessing a fraction of the power of an Unreal or Unity type engine.

 

It leads me to question, is HTML5 a feature everybody wants but nobody uses?

 

What inspired this thought was this interesting article on Godot’s efforts to target the web.  I have been actively monitoring the game development world during the entire timespan Juan described and I witnessed all of these “next big thing” technologies that came and went.  Summarized from the article, they were:

  • Native Plugins
  • Google Native Client
  • Flash
  • ASM.js
  • WebAssembly

 

This is by no means a comprehensive list (LibGDX, for example, compiles from Java to JavaScript using Google's GWT), and some of these technologies were certainly successful for a time, such as Flash.  Of course, games written and targeted for HTML5 from the start exist too.  It's this whole HTML5-as-an-additional-target approach that I am beginning to think is just a gimmick.  The funny part is, I love the idea too… I evaluate a lot of game engines and I always see "HTML5 export" and think "ooooh, that's good".

 

Games in the browser certainly seemed to have a brilliant future at one point.  The Unity web plugin really did bring Unity to the web.  I used a couple of Unity powered 3D tools, such as Mixamo, and the experience was very near to native (they have since ported to HTML5 after the Unity plugin was end-of-lifed).  Google's Native Client (NaCl) had some promise, with shipped titles such as AirMech leading the way, but a single browser solution was never going to fly.  Ultimately Flash, especially with its promising (at the time) 3D API, had the most potential, but a combination of Apple's malevolence and Adobe's incompetence brought it to an inglorious end.

 

Perhaps it’s just ignorance on my end.  Can anyone out there point me at an HTML5 game (not a tech demo, an actual game) created in either Unity or Unreal Engine?  I realize both are fairly new, so I would also settle for examples that are under development but show promise.  Keep in mind here, I’m not talking about HTML5 game engines, there certainly is a future in HTML5 games and of course they can be wrapped and deployed to a variety of platforms.  No, it’s “HTML5 as an additional target platform” support that seems to be pure gimmick at this point.


19. June 2015

 

Now we move on to a topic that people always seem to love, graphics!  In the past few chapters/videos I’ve said over and over “don’t worry, we will cover this later”, well… welcome to later. We are primarily going to focus on loading and displaying textures using a SpriteBatch.  As you will quickly discover, this is a more complex subject than it sounds.

 

As always, there is an HD video of the content available here

Before we can proceed too far we need a texture to draw.  A texture can generally be thought of as a 2D image stored in memory.  The source image of a texture can be in bmp, dds, dib, hdr, jpg, pfm, png, ppm or tga formats.  In the "real world" that generally means bmp, jpg or png, and there is something to be aware of right away.  Of those three formats, only png has an alpha channel, meaning it supports transparency out of the box.  There are however ways to represent transparency in the other formats, as we will see shortly.  If you've got no idea which format to pick, or why, pick png.

 

 

Using the Content Pipeline

If you’ve been reading since the beginning you’ve already seen a bit of the content pipeline, but now we are going to actually see it in action with a real world example.  

Do we have to use the content pipeline for images?


I should make it clear, you can load images that haven't been converted into xnb format.  As of XNA 4, a simpler image loading API was added that allows you to load gif, jpg and png files directly, with the ability to crop, scale and save.  The content pipeline does a lot for you though, including massaging your texture into a platform friendly format, potentially compressing your image, generating mip maps or power of two textures, pre-multiplying alpha (explained shortly), optimizing loading and more.  MonoGame included a number of methods for directly loading content to make up for its lack of a working cross platform pipeline; with the release of the content pipeline tool, these methods are deprecated.  Simply put, for game assets (aka, not screenshots, dynamic images, etc.), you should use the content pipeline.
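That said, if you ever do need to load an image at runtime, here is a minimal sketch of direct loading using MonoGame's Texture2D.FromStream (the file name here is just an example):

// Loading a PNG directly, bypassing the content pipeline entirely.
// Note: the alpha channel will NOT be premultiplied (see Blend States later),
// and disposing of the texture becomes YOUR responsibility.
using (var stream = System.IO.File.OpenRead("rawImage.png"))
{
    texture = Texture2D.FromStream(GraphicsDevice, stream);
}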

Create a new project, then in the Contents folder, double click the file Content.mgcb.

image

 

This will open the MonoGame Content Pipeline tool.  Let's add our texture file: simply select Edit->Add->Existing Item...

image

Navigate to and select a compatible image file.  When prompted, choose the mode that makes the most sense.  I want the original to be untouched, so I am choosing Copy the file to the directory.

image

 

Your content project should now look like:

image

The default import settings for our image are fine, but we need to set the Content build platform.  Select Content in the dialog pictured above, then under Platform select the platform you need to build for.

image

Note the two options for Windows, Windows and WindowsGL.  The Windows platform uses a DirectX backend for rendering, while WindowsGL uses OpenGL.  This does have an effect on how content is processed so the difference is important. 

Now select Build->Build, saving when prompted:

image

 

You should get a message that your content was built.

image

We are now finished importing, return to your IDE.

Important Platform Specific Information


On Windows the .mgcb file is all that you need.  When the IDE encounters it, it basically treats it as a symlink and instead refers to the contents it contains.  Currently when building on MacOS using Xamarin, you have to manually copy the generated XNB contents into your project and set their build type as Content.  The generated files are located in the Output Folder as configured in the Content Pipeline.  I have been notified that a fix for this is currently underway, so hopefully the Mac and Windows development experience will be identical soon.
 
Alright, we now have an image to work with, let’s jump into some code.
 
 
 

Loading and displaying a Texture2D

So now we are going to load the texture we just added to the content project, and display it on screen.  Let’s just jump straight into the code.

using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;
using Microsoft.Xna.Framework.Input;

namespace Example1
{
    public class Game1 : Game
    {
        GraphicsDeviceManager graphics;
        SpriteBatch spriteBatch;
        Texture2D texture;

        public Game1()
        {
            graphics = new GraphicsDeviceManager(this);
            Content.RootDirectory = "Content";
        }

        protected override void Initialize()
        {
            base.Initialize();
        }

        protected override void LoadContent()
        {
            spriteBatch = new SpriteBatch(GraphicsDevice);
            texture = this.Content.Load<Texture2D>("logo");
        }

        protected override void UnloadContent()
        {
            //texture.Dispose(); <-- Only directly loaded
            Content.Unload();
        }

        protected override void Update(GameTime gameTime)
        {
            if (GamePad.GetState(PlayerIndex.One).Buttons.Back == ButtonState.Pressed
                || Keyboard.GetState().IsKeyDown(Keys.Escape))
                Exit();
            base.Update(gameTime);
        }

        protected override void Draw(GameTime gameTime)
        {
            GraphicsDevice.Clear(Color.CornflowerBlue);

            spriteBatch.Begin();
            spriteBatch.Draw(texture,Vector2.Zero);
            spriteBatch.End();

            base.Draw(gameTime);
        }
    }
}
 
When we run this code we see:
 
image
 
 
Obviously your image will vary from mine, but our texture is drawn on screen at the position (0,0).
 
There are a few key things to notice here.  First we added a Texture2D member to our class, which is essentially the in-memory container for our texture image.  In LoadContent() we then load our image into our texture using the call:
 
texture = this.Content.Load<Texture2D>("logo");
 
You'll notice we use our Game's Content member here.  This is an instance of Microsoft.Xna.Framework.Content.ContentManager and it is ultimately responsible for loading binary assets from the content pipeline.  The primary method is the generic Load() method, which takes a single parameter: the name of the asset to load, minus the extension.  That last part is a very common tripping point.  In addition to Texture2D, Load() supports the following types:
  • Effect
  • Model
  • SpriteFont
  • Texture
  • Texture2D
  • TextureCube

It is possible to extend the processor to support additional types, but it is beyond the scope of what we are covering here today.
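The same generic call pattern applies to the other supported types, for example (asset names here are hypothetical):

// Each Load<T>() call takes the asset name, minus the extension.
SpriteFont font = Content.Load<SpriteFont>("myFont");
Effect shader = Content.Load<Effect>("myShader");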

Next we get to the UnloadContent() method, where we simply call Content.Unload().  The ContentManager "owns" all of the content it loads, so this cleans up the memory for all of the objects loaded through it.  Notice I left a commented out example calling Dispose().  It's important to know that if you load a texture outside of the ContentManager, or create one dynamically, it is your responsibility to dispose of it or you may leak memory.  You may say, hey, this will all get cleaned up on program exit anyways.  Honestly, that isn't technically wrong, although cleaning up after yourself is certainly a good habit to get into.

 

Memory Leaks in C#?


Many developers new to C# think that because it's managed you can't leak memory.  This simply isn't true.  While memory management is much simpler in C# than in languages like C++, it is still quite possible to have memory leaks.  In C# the easiest way is to not Dispose() of classes that implement IDisposable.  An object that implements IDisposable owns an unmanaged resource (such as a texture), and that memory will be leaked if nobody calls the Dispose() method.  Wrapping the allocation in a using statement will result in Dispose() being called at the end of scope.  As a point of trivia, other common C# memory leaks are caused by not removing event listeners and, of course, calling leaky native code (P/Invoke).
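Here is a quick sketch of both the leak and the using statement fix, with a dynamically created texture standing in as the disposable resource:

// A directly created texture is an unmanaged GPU resource; if nothing ever
// calls Dispose() on it, that GPU memory is leaked.
var leaky = new Texture2D(GraphicsDevice, 256, 256);

// Wrapping the allocation in a using statement guarantees Dispose() is
// called when the block's scope ends, even if an exception is thrown.
using (var safe = new Texture2D(GraphicsDevice, 256, 256))
{
    // ... work with the texture here ...
}   // safe.Dispose() runs automatically here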
 
Now that we have our texture loaded, it's time to display it on screen.  This is done with the following code:
    spriteBatch.Begin();
    spriteBatch.Draw(texture,Vector2.Zero);
    spriteBatch.End();

I will explain the SpriteBatch in a few moments, so let's instead focus on the Draw() call, which needs to be made within a Begin()/End() pair.  Let's just say SpriteBatch.Draw() has A LOT of overloads.  In this example we simply draw the passed in texture at the passed in position (0,0).  Next let's look at a few of the options we have when calling Draw().

Where is 0,0?


Different libraries, frameworks and engines have different coordinate systems. In XNA, like most windowing or UI libraries, the position (0,0) refers to the top left corner of the screen. For sprites, (0,0) refers to the top left corner as well, although this can be changed in code. In many OpenGL based game engines, (0,0) is located at the bottom left corner of the screen. This distinction becomes especially important when you start working with 3rd party libraries like Box2D, which may have a different coordinate system. Using a top left origin system has advantages when dealing with UI, as your existing OS mouse and pixel coordinates are the same as your game's. However the OpenGL approach is more consistent with mathematics, where positive X and Y coordinate values refer to the top right quadrant on a Cartesian plane. Both are valid options, work equally well, just require some brain power to convert between.
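If you ever need to bring coordinates over from a bottom-left origin system, the conversion is a simple flip on the Y axis.  A hypothetical helper:

// Converts a position expressed in a bottom-left origin system (common in
// OpenGL based engines) into XNA's top-left origin system.
Vector2 FromBottomLeft(Vector2 position, float spriteHeight, float screenHeight)
{
    // X is unchanged; Y is re-measured from the opposite screen edge.
    return new Vector2(position.X, screenHeight - position.Y - spriteHeight);
}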

 

Translation and Scaling

spriteBatch.Draw(texture, destinationRectangle: new Rectangle(50, 50, 300, 300));
 
This will draw our sprite at the position (50,50) and scaled to a width of 300 and a height of 300.

image

 

Rotated

spriteBatch.Draw(texture, 
    destinationRectangle: new Rectangle(50, 50, 300, 300),
    rotation: MathHelper.ToRadians(-45f)
    );

This will rotate the image -45 degrees about its origin.  Note that the rotation parameter is specified in radians, not degrees, which is why we pass it through MathHelper.ToRadians().

image

 

Notice that the rotation was performed relative to the top left corner of the texture.  Quite commonly when rotating and scaling you would rather do it about the sprite's midpoint.  This is where the origin value comes in.

 

Rotated about the Origin

spriteBatch.Draw(texture,
    destinationRectangle: new Rectangle(150 + 50, 150 + 50, 300, 300),
    origin: new Vector2(texture.Width/2, texture.Height/2),
    rotation: MathHelper.ToRadians(-45f)
    );

Ok, this one may require a bit of explanation.  The origin is now the midpoint of our texture; however, we are also translating and scaling relative to that midpoint, not the top left.  This means the coordinates passed into our Rectangle need to take this into account if we wish to remain centered.  You also need to keep in mind that you are resizing the texture as part of the draw call.  This code results in:

image

 

For a bit of clarity, if we hadn't translated (moved) the above, and instead used this code:

spriteBatch.Draw(texture,
    destinationRectangle: new Rectangle(0, 0, 300, 300),
    origin: new Vector2(texture.Width/2, texture.Height/2),
    rotation: MathHelper.ToRadians(-45f)
    );
 
We would rotate about the center of our sprite, but positioned at the origin of our screen:

image

 

So it’s important to consider how the various parameters passed to draw interact with each other!

 

Tinted

spriteBatch.Begin();
spriteBatch.Draw(texture, 
    destinationRectangle: new Rectangle(50, 50, 300, 300),
    color:Color.Red);
spriteBatch.End();
 
image
 
The Color passed in (in this case Red) is multiplied with every pixel in the texture.  Notice how it only affects the texture; the Cornflower Blue background is unaffected.  Multiplying red with the blue pixels zeroes out every channel, resulting in a black-ish colour, while white pixels simply became red.
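Conceptually the tint is a per-channel modulation, something like the following sketch (the real work happens in the sprite shader; this is just the math):

// Each channel of the texture's pixel is scaled by the matching tint channel.
Color Tint(Color texel, Color tint)
{
    return new Color(
        texel.R * tint.R / 255,   // a blue pixel (R=0) stays 0 under any tint
        texel.G * tint.G / 255,   // Color.Red has G=0, so green is zeroed out
        texel.B * tint.B / 255,   // Color.Red has B=0, so blue is zeroed out
        texel.A * tint.A / 255);
}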
 
 
 

Flipped

spriteBatch.Draw(texture, 
    destinationRectangle: new Rectangle(50, 50, 300, 300),
    effects:SpriteEffects.FlipHorizontally|SpriteEffects.FlipVertically
    );

That's about it for Draw(); now let's look a bit closer at SpriteBatch.

 

SpriteBatch

 

In order to understand exactly what SpriteBatch does, it's important to understand how XNA does 2D.  At the end of the day, with modern GPUs, dedicated 2D renderers no longer really exist.  Instead the renderer is actually still working in 3D and faking 2D.  This is done by using an orthographic camera (explained later, don't worry) and drawing each sprite as a textured 2D quad that is parallel to the camera.  SpriteBatch takes care of this process for you, making it feel like you are still working in 2 dimensions.

That isn’t it however, SpriteBatch is also a key optimization trick.  Consider if your scene consisted of hundreds of small block shape sprites each consisting of a small 32x32 texture, plus all of the active characters in your scene, each with their own texture being drawn to the screen.  This would result in hundreds or thousands of Direct3D or OpenGL draw calls, which would really hurt performance.  This is where the “Batch” part of sprite batch comes in.  In it’s default operating mode ( deferred ), a simply queues up all of the drawing calls, they aren’t executed until End() is called.  It then tries to “batch” them all together into a single draw call, thus rendering as fast as possible.

There are settings attached to a SpriteBatch, specified in the Begin() call, that we will see shortly.  These are the same for every single Draw call within the batch.  Additionally you should try to keep every Draw call within a batch using the same texture, or within as few different textures as possible, as each texture change within a batch incurs a performance penalty.  You can also call multiple Begin()/End() pairs in a single render pass, just be aware that the Begin() process is rather expensive and this can quickly hurt performance if you do it too many times.  Don't worry though, there are ways to easily organize multiple sprites within a single texture.  If by chance you actually want to perform each Draw call as it occurs, you can instead run the sprite batch in immediate mode, although since XNA 4 (which MonoGame is based on) there is little reason to use immediate mode, and the performance penalty is harsh.
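To make the batching win concrete, here is a sketch that draws 500 tiles from a single texture (blockTexture is a hypothetical Texture2D loaded elsewhere); in deferred mode these typically collapse into a single GPU draw call:

spriteBatch.Begin();    // deferred sort mode is the default
for (int i = 0; i < 500; i++)
{
    // 25 tiles per row, 32 pixels apart; every draw shares one texture
    var position = new Vector2((i % 25) * 32, (i / 25) * 32);
    spriteBatch.Draw(blockTexture, position, Color.White);
}
spriteBatch.End();      // the queued sprites are batched and submitted here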

One other major function of the SpriteBatch is handling blending, which is how overlapping sprites interact.

 

Sprite Blending

Up until now we’ve used a single sprite with no transparency, so that’s been relatively simple.  Let’s instead look at an example that isn’t entirely opaque.

Let’s go ahead an add a transparent sprite to our content project.  Myself I am going to use this one:

transparentSprite

… I’m sorry, I simply couldn’t resist the pun.  The key part is that your sprite supports transparency, so if you draw it over itself you should see:

transparentSpriteOverlay

 

Now let’s change our code to draw two sprites in XNA.

using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;
using Microsoft.Xna.Framework.Input;

namespace Example2
{
    public class Game1 : Game
    {
        GraphicsDeviceManager graphics;
        SpriteBatch spriteBatch;
        Texture2D texture;

        public Game1()
        {
            graphics = new GraphicsDeviceManager(this);
            graphics.PreferredBackBufferWidth = 400;
            graphics.PreferredBackBufferHeight = 400;
            Content.RootDirectory = "Content";
        }

        protected override void LoadContent()
        {
            spriteBatch = new SpriteBatch(GraphicsDevice);
            texture = this.Content.Load<Texture2D>("transparentSprite");
        }

        protected override void Update(GameTime gameTime)
        {
            if (GamePad.GetState(PlayerIndex.One).Buttons.Back == ButtonState.Pressed
                || Keyboard.GetState().IsKeyDown(Keys.Escape))
                Exit();
            base.Update(gameTime);
        }

        protected override void Draw(GameTime gameTime)
        {
            GraphicsDevice.Clear(Color.CornflowerBlue);

            spriteBatch.Begin();
            spriteBatch.Draw(texture, Vector2.Zero);
            spriteBatch.Draw(texture, new Vector2(100,0));
            spriteBatch.End();

            base.Draw(gameTime);
        }
    }
}
 
... and run:
image
Pretty cool.

 

This example worked right out of the box for a couple of reasons.  First, both draws used the same transparent sprite, so draw order didn't matter.  Also, when we ran the content pipeline, the default importer (and the default sprite batch blend mode) is transparency friendly.

image

This setting pre-multiplies your image's colour channels by its alpha channel on import, creating the special transparency data the SpriteBatch uses when calculating transparency between images.
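In case you are curious, the premultiply step amounts to roughly the following per-pixel math (just a sketch; the pipeline does this for you at import time):

// Scales each colour channel by the pixel's alpha ahead of time, so the
// AlphaBlend blend state can skip that multiply when compositing at draw time.
Color Premultiply(Color pixel)
{
    return new Color(
        pixel.R * pixel.A / 255,
        pixel.G * pixel.A / 255,
        pixel.B * pixel.A / 255,
        pixel.A);
}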

 

Let’s look at a less trivial example, with a transparent and opaque image instead.

using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;
using Microsoft.Xna.Framework.Input;

namespace Example2
{
    public class Game1 : Game
    {
        GraphicsDeviceManager graphics;
        SpriteBatch spriteBatch;
        Texture2D texture;
        Texture2D texture2;

        public Game1()
        {
            graphics = new GraphicsDeviceManager(this);
            graphics.PreferredBackBufferWidth = 400;
            graphics.PreferredBackBufferHeight = 400;
            Content.RootDirectory = "Content";
        }

        protected override void LoadContent()
        {
            spriteBatch = new SpriteBatch(GraphicsDevice);
            texture = this.Content.Load<Texture2D>("logo");
            texture2 = this.Content.Load<Texture2D>("transparentSprite");
        }

        protected override void Update(GameTime gameTime)
        {
            if (GamePad.GetState(PlayerIndex.One).Buttons.Back == ButtonState.Pressed
                || Keyboard.GetState().IsKeyDown(Keys.Escape))
                Exit();
            base.Update(gameTime);
        }

        protected override void Draw(GameTime gameTime)
        {
            GraphicsDevice.Clear(Color.CornflowerBlue);

            spriteBatch.Begin();
            spriteBatch.Draw(texture, Vector2.Zero);
            spriteBatch.Draw(texture2, Vector2.Zero);
            spriteBatch.End();

            base.Draw(gameTime);
        }
    }
}
 
When run:

image

So far, so good.  Now let’s mix up the draw order a bit…

spriteBatch.Begin();
spriteBatch.Draw(texture2, Vector2.Zero);
spriteBatch.Draw(texture, Vector2.Zero);
spriteBatch.End();

… and run:

image

Oh…

As you can see, the order in which we make Draw calls is, by default, the order in which the sprites are drawn.  That is, the second Draw() call draws over the results of the first Draw() call, and so on.

 

There is a way to explicitly set the drawing order:

spriteBatch.Begin(sortMode: SpriteSortMode.FrontToBack);
spriteBatch.Draw(texture2, Vector2.Zero, layerDepth:1.0f);
spriteBatch.Draw(texture, Vector2.Zero, layerDepth:0.0f);
spriteBatch.End();

 

Here you are setting the SpriteBatch sort order to front to back, then manually setting the draw layer in each draw call.  As you might guess, there is also a BackToFront setting.  SpriteSortMode is also what determines whether drawing is immediate (SpriteSortMode.Immediate) or deferred (SpriteSortMode.Deferred).

 

Blend States

 

We mentioned earlier that textures imported using the Content Pipeline by default have a special pre-calculated transparency channel created.  This corresponds with SpriteBatch's default BlendState, AlphaBlend.  This uses the magic values created by the pipeline to determine how overlapping transparent sprites are rendered.  If you don't have a really good reason otherwise, and are using the Content Pipeline to import your textures, you should stick to the default.  I should point out, this behavior only became the default in XNA 4, so older tutorials may describe much different behavior.

 

The old default was interpolative blending, which uses the raw RGBA values of the texture to determine transparency.  This can lead to some strange rendering artifacts (discussed here: https://en.wikipedia.org/wiki/Alpha_compositing).  The advantage is that all you need to blend images is an alpha channel; there is no requirement to create a special pre-multiplied channel, which means you don't have to run these images through the content pipeline.  If you wish to do things the "old" way, when importing your assets (if not simply loading directly from file), select false for the premultiplied alpha option in the Texture processor settings of the Content Pipeline.  Then in your SpriteBatch, do the following:

spriteBatch.Begin(blendState:BlendState.NonPremultiplied);
 
There are additional BlendState options, including Additive (colors are simply added together) and Opaque (subsequent draw calls simply overwrite earlier ones).  You can have a great deal of control over the BlendState, but most projects simply will not require it.  One other thing I ignored is chroma keying.  This is another option for supporting transparency: basically you dedicate a single color to be transparent, then specify that color in the Content Pipeline.  You are essentially forming a 1-bit alpha channel and "green screening" like in the movies.  Obviously you cannot use that color anywhere else in your image, however.  In exchange for ugly source sprites and extra labor, you save in file size, as you don't need to encode an alpha channel.
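As a quick illustration, switching to additive blending is just another Begin() parameter; overlapping draws then sum toward white, which is handy for glows and light effects:

spriteBatch.Begin(blendState: BlendState.Additive);
spriteBatch.Draw(texture2, Vector2.Zero, Color.White);
spriteBatch.Draw(texture2, new Vector2(20, 0), Color.White);  // the overlap brightens
spriteBatch.End();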
 
 
There is some additional functionality built into SpriteBatch, including texture sampling, stencil buffers, matrix transforms and even special effects.  These are well beyond the basics though, so we will have to cover them at a later stage.
 
 

The Video

 


18. June 2015

 

So I decided to take a look at the process of extending the Godot Engine, and to my horror I discovered there is no solution file!  UGH…  Yeah, you can compile from the command line, but that's not that pleasant of a development experience for an IDE warrior like myself.  I thought I'd found a solution, which I posted about here, and I almost had, but it still left me unable to debug and with mostly broken Intellisense support… so, yeah, not great.  Then I found this post, which although it didn't work as written, did fill in the missing pieces.  So, if you want to work with Godot using Visual Studio, here's how you do it.

 

First you need to have a properly configured development environment… meaning Visual Studio, git and python, all in your PATH.  Next launch a Visual Studio command line, this guy…

image 

 

Make and change to the directory where you want to install Godot.  Keep in mind this process will create a godot folder, so if you want c:\godot\, run the following from c:\.

 

git clone https://github.com/okamstudio/godot.git

This will download the latest source for Godot Engine.  Next we want to build it for the first time and generate a Visual Studio solution file.  cd into the Godot directory and run:

scons vsproj=yes platform=windows

If you get an error here, first be sure you are using a Visual Studio command prompt, and next be certain you are in the correct directory.  Otherwise your computer should churn away for a few minutes while the Godot libraries and tools are built.

After several minutes, in an ideal world you should see:

image

 

This means Godot successfully built and it created your Visual Studio project files.  Woot.  Now time to get Visual Studio to actually work.

 

First in the root of your project directory ( C:\Godot\ in my case ), create a file named build.bat, with the following contents:

set vc_path=%1
call %vc_path% & scons platform=windows

 

Next load the generated sln file in Visual Studio.  Give it a minute or two to parse all the files.  You will notice massive amounts of Intellisense errors, don’t worry, we will fix those next.

In Solution Explorer, right click your Project ( not solution, Project! ) and select Properties.

image

Select VC++ Directories then double click Include Directories:

image

Append the following to the value in Include Directories:

$(ProjectDir);$(ProjectDir)/core;$(ProjectDir)/core/math;$(ProjectDir)/tools;$(ProjectDir)/drivers;$(ProjectDir)/platform/windows;

This adds the include directories Godot depends on.

 

Next click NMake on the left hand side.  We now want to replace the contents of Build Command Line and Output with the following:

image

Then click Apply then OK.

 

You should now be able to hit Build->Build Solution.  If all went right, you should see build progress in the Output panel:

image

 

You can now run and debug as normal: set breakpoints within the code, hit F5 and the Godot editor will run.  Keep in mind, it's the Godot editor you are debugging, not the library, although since you have full source you should easily be able to step into Godot core code; you just may not be able to set breakpoints there.

 

You are now ready to extend or contribute to Godot using the full power and comfort of Visual Studio.

 

The following is a video showing exactly this process, just in case I missed a step.

 


18. June 2015

 

I’ve never really been been much into the various cross platform build tools like SCons or CMake, I just don’t maintain all that much cross platform code, so all the extra initial effort just hasn’t been worth it.

 

Well, last night I found myself wanting to edit Godot's source code, so let me just fire up Visual Studio and…  ahhhh crap, no Visual Studio project!  Hmmm, this is just annoying.  I don't mind building from the command line, but I certainly mind giving up my IDE and, more importantly, Intellisense.  So I resigned myself to having to create my own Visual Studio solution (sln) file.  This is always a somewhat annoying process, as you've got to figure out all the various dependencies and recreate them.

 

Then I had a thought… hmmmm, I wonder if this functionality is built into SCons?  I mean, CMake is essentially a build file builder (it generates a project file for your platform/compiler of choice), so perhaps this functionality is built into SCons too.  Lo and behold, it is!  If you are working on Windows with Visual Studio, simply fire up a Visual Studio command line so you have the proper environment variables set.

 

Then simply change to the directory where you installed the Godot source (or any other SCons built project; just locate the SConstruct file) and type:

scons vsproj=yes platform=windows

… and voila.

image

The only downside is the NMake build commands don’t work.  If you check your project you see:

image

These are the commands that are run when you choose Build, Rebuild or Clean, and each basically just calls SCons.  Oddly, for me they don't seem to run in the proper environment (VC isn't found), and I can't specify the platform=windows argument required to run.

 

I’m at a bit of a loss however how I would actually debug the project, which is somewhat annoying…

 

EDIT:

Hmmm, this seems to set me up for editing which is cool, but I believe I lose access to the single best feature of Visual Studio… the debugger.



Faceware Release New MoCap Software, Including A Free PLE Version


8. March 2016

 

Faceware Technologies, a provider of "markerless" facial motion capture solutions used in games such as NBA2K16 and Destiny, just released new versions of their software, including a free PLE (Personal Learning Edition).  That is a term I thought went away with Maya PLE; I'm somewhat surprised to see it again.  In addition to the free edition, they have also updated their marquee applications and created a new rental program.

FTI_WEB_LOGO_HEADER

First, let’s cover the free version of Faceware.  The Personal Learning Edition is a free license that is limited to non-commercial use.  From the Faceware announcement:

More and more individual content creators are adopting Faceware’s technologies to learn or improve their facial mocap and animation skills. The Personal Learning Edition (PLE) was designed with those users in mind. PLE is a free license of Analyzer 3.0 and Retargeter 5.0 for individual, non-commercial use, including research. The PLE includes all of the functionality of the Studio version of Analyzer 3.0 and Retargeter 5.0 (see below), and will be kept on feature parity with the latest versions of those packages, so individual artists will always have access to the latest facial motion capture functionality from Faceware.  For universities and schools, Analyzer 3.0 and Retargeter 5.0 will still be available in-lab and classroom licenses for adoption into relevant curriculums.  

 

There were also updates to their Analyzer and Retargeter software packages.  The changes to Analyzer 3.0:

Analyzer is Faceware’s award-winning markerless facial motion tracking software. Based on advanced computer vision technology, it converts any video of an actor’s facial performance into facial motion files for use in Faceware’s companion software, Retargeter. To enhance Analyzer’s usability for artists in different countries, Faceware has localized Analyzer into nine additional languages: Japanese, Chinese (Simplified), Korean, French, German, Spanish (Castilian), Russian, Polish and Arabic. Analyzer will now also support timecode, editing of in/out points of any new video, and the ability to capture live video straight into Analyzer’s workflow using any of Faceware’s hardware systems. All new features will be available in both Analyzer 3.0 Studio and Studio Plus versions.

While here are the details on Retargeter:

Faceware’s award-winning Retargeter software maps facial motion capture data from Analyzer onto any facial rig through a plug-in for Autodesk Maya, 3DS Max, and MotionBuilder. Like Analyzer, Retargeter 5.0 has been localized into nine additional languages and it supports timecode. Other highlights include an updated shared pose library workflow as well as general speed improvements.  All new features will be available in both Studio and Studio Plus versions of Retargeter 5.0.  

 

Faceware also announced a complete rental option, including hardware and software, on a month by month basis:

For commercial projects that need to be completed on relatively tight schedules, studios are now able to rent Faceware’s entire real-time and creative suite software lineup in 30-day increments. Rental costs start at $340 USD per month.

 

For more details, see the complete release here.

Popular Comments