23. November 2015

 

 

In this tutorial we are going to look at audio programming in Cocos2d-x.  We will look at playing music and sound effects using SimpleAudioEngine, one of two audio engines built into Cocos2d-x.  There is a second, newer, more complex and experimental engine, AudioEngine, that we will discuss later.  Let’s start straight away by playing some music.  To make use of SimpleAudioEngine we need to add an additional include:

#include "SimpleAudioEngine.h"

 

Next we need a song of some sort to play; simply copy an appropriate audio file into your resources folder.  I used an mp3 file named, creatively enough, song.mp3.

Supported Audio File Formats


The audio formats supported by Cocos2d-x depend entirely on the platform you run on.  The main thing to be aware of is that OGG is the preferred music format on Android, while it is completely unsupported on iOS, which prefers MP3.

You should be aware that MP3 is a patent-encumbered format and should generally be avoided when possible; if your app reaches certain sales thresholds, you may be required to pay license fees.  Sadly, avoiding it generally isn’t an option on iOS devices, as MP3 is the primary audio format there.  For sound effects, WAV files are commonly used, offering quick playback at the cost of file size.  Here are the details of supported audio files on iOS
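Since the preferred format differs per platform, one approach is to resolve the file extension at runtime and ship both an OGG and an MP3 version of each track. The sketch below is purely illustrative: the `Platform` enum and `musicFile()` helper are hypothetical, not part of Cocos2d-x (a real project would more likely branch on the engine's `CC_TARGET_PLATFORM` preprocessor defines).

```cpp
#include <cassert>
#include <string>

// Hypothetical platform tag; in a real Cocos2d-x project you would
// branch on the CC_TARGET_PLATFORM preprocessor defines instead.
enum class Platform { iOS, Android, Windows };

// Resolve a base name like "song" to a platform-appropriate music file.
// OGG is preferred on Android; MP3 is used for iOS (and, in this
// sketch, as the desktop fallback).
std::string musicFile(const std::string& baseName, Platform platform)
{
    if (platform == Platform::Android)
        return baseName + ".ogg";
    return baseName + ".mp3";
}
```

With a helper like this you can call something like `audio->playBackgroundMusic(musicFile("song", platform).c_str())` without sprinkling `#ifdef`s through your scene code.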

Time now for some coding.  Implement the following init() method:

bool HelloWorld::init()
{
   if (!Layer::init())
      return false;

   auto audio = CocosDenshion::SimpleAudioEngine::getInstance();
   audio->preloadBackgroundMusic("song.mp3");
   audio->playBackgroundMusic("song.mp3");

   return true;
}

That is all that is required to load and play a music file. In fact, the preloadBackgroundMusic() call wasn't even required, so we could have used even less code; however, preloading your music guarantees that you will not suffer a slowdown the first time a song plays. You can also pause and resume playback of background music, or switch tracks completely, like so:

   auto eventListener = EventListenerKeyboard::create();
   eventListener->onKeyPressed = [audio](EventKeyboard::KeyCode keyCode, Event* event) {

      switch (keyCode) {
         case EventKeyboard::KeyCode::KEY_SPACE:
            if (audio->isBackgroundMusicPlaying())
               audio->pauseBackgroundMusic();
            else
               audio->resumeBackgroundMusic();
            break;

         case EventKeyboard::KeyCode::KEY_RIGHT_ARROW:
            audio->playBackgroundMusic("song2.mp3");
            break;

         case EventKeyboard::KeyCode::KEY_LEFT_ARROW:
            audio->playBackgroundMusic("song.mp3");
            break;
         default:
            break;
      }
   };

   _eventDispatcher->addEventListenerWithFixedPriority(eventListener, 2);

Hitting the spacebar will toggle playback of the currently playing song.  Hitting the right arrow will start playing (or restart, if already playing) song2.mp3, while the left arrow will start or restart playback of song.mp3.  You will notice from this example that only one song can be played at a time.  Generally this isn’t a limitation, as it is normal to have only one active soundtrack at a time.

setBackgroundMusicVolume() doesn't work!


A bit of a warning: at least on Windows, calling setBackgroundMusicVolume() does nothing, making it impossible to change the volume of a playing music file. This may not be the case on other platforms; I did not test. It was filed as a bug a long time back and does not appear to have been addressed.

 

Now let's look at playing sound effects instead.  Playing music and effects is almost identical.  The biggest difference is that sound effects are expected to support multiple concurrent instances.  That is to say, while you can only play one song at a time, you can play multiple sound effects at once. Consider this sample:

bool HelloWorld::init()
{
   if (!Layer::init())
      return false;

   auto audio = CocosDenshion::SimpleAudioEngine::getInstance();
   
   audio->preloadEffect("gun-cocking-01.wav");
   audio->preloadEffect("gun-shot-01.wav");

   audio->playEffect("gun-cocking-01.wav");

   Director::getInstance()->getScheduler()->schedule([audio](float delta) {
      audio->playEffect("gun-shot-01.wav");
      audio->unloadEffect("gun-cocking-01.wav");
   }, this, 1.5f, 0, 0.0f, false, "myCallbackKey");

   return true;
}

 

In this example we preload two WAV sound effects, a gun cocking and a gun shot.  Playing a sound effect is as simple as calling playEffect(), passing in the file name.  Of course, be certain to copy the appropriate sound files to your project’s resource folder before running this example.  Next, the example schedules a lambda to be called 1.5 seconds after the gun cocking sound starts playing, which plays our gun shot sound.  At that point we are done with the gun cocking effect, so we unload it from memory using unloadEffect().  You can still call playEffect() with that file in the future, but it will result in the file being loaded again.

 

This example might be somewhat convoluted, but it illustrates, and works around, a key weakness in the CocosDenshion audio library.  It is a very simple and straightforward library, but if you want to do “advanced” things like detecting when a song or sound effect has ended, that functionality is simply not available.  You either have to use the experimental AudioEngine, which we will cover later, or use an external audio library such as FMOD.  SimpleAudioEngine is extremely easy to use but not very powerful, so it’s certainly a trade-off.  If you just need background music and fire-and-forget sound effects, SimpleAudioEngine should be just fine for you.

 

One final topic to cover is handling your app being minimized or forced into the background, at which point you most certainly want to stop audio playback.  Thankfully this is easily accomplished: your AppDelegate has a pair of methods, applicationDidEnterBackground() and applicationWillEnterForeground().  Simply add the following code:

void AppDelegate::applicationDidEnterBackground() {
   auto audio = CocosDenshion::SimpleAudioEngine::getInstance();
   audio->pauseAllEffects();
   audio->pauseBackgroundMusic();
}

void AppDelegate::applicationWillEnterForeground() {
   auto audio = CocosDenshion::SimpleAudioEngine::getInstance();
   audio->resumeAllEffects();
   audio->resumeBackgroundMusic();
}

 

This will cause all of your currently playing sound effects and music files to be paused when your application enters the background, and they will all resume when your application regains focus.


20. November 2015

 

Recently I finished a tutorial covering creating isometric maps using the Tiled map editor.  One commonly occurring question was “where can I get Tiled-friendly isometric tile sets for free?”  The answer, unfortunately, is that there aren’t a ton of options.  There are a few good isometric tile sets out there (I may cover this in a different post), but not a ton of choices.  So instead I’ve decided to look at the process of making isometric tiles in Blender, and thankfully it’s not that difficult.  The process follows.  There is also an HD video version of this tutorial available here.

 


First load up Blender.  We are going to start by just making a 64x64 pixel tile from the default Blender cube.  For this exercise we are going to have our object centered about the origin.  Precision is of critical importance to this entire process, so we are going to be working numerically in the properties panel (N key).

 

image10

 

Now let’s delete all of the faces except the base:

GIF

 

The majority of the work is actually in setting up the camera.  With the camera selected, we need to position it.  Since our object is at the origin, we need to offset away from it.  As the isometric style is viewed from above and at an angle, we rotate 60 degrees on the X axis and 45 degrees around the Z axis.  If you are using a more skewed perspective (not a 2:1 ratio), you will choose a different rotation.  We also move the camera back along the Y axis and up the Z; we will be fine-tuning the location slightly later.
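Why those particular angles? With an orthographic camera tilted 60 degrees from vertical, the view direction sits 30 degrees above the ground, so depth on screen is compressed by sin(30°) = 0.5, and a square tile rotated 45 degrees projects to exactly twice as wide as it is tall. Here is a quick numerical sanity check of that claim (plain math, nothing Blender-specific; the helper is purely illustrative):

```cpp
#include <cassert>
#include <cmath>

struct Vec2 { double x, y; };

// Project a ground-plane point (z = 0) through a 45 degree rotation
// around Z, then an orthographic view at 30 degrees elevation (the
// Blender camera rotated 60 degrees on X).  Screen X is unchanged;
// screen Y is the rotated depth compressed by sin(30 deg) = 0.5.
Vec2 isoProject(double x, double y)
{
    const double pi = std::acos(-1.0);
    const double a  = pi / 4.0;               // 45 degree Z rotation
    double rx = x * std::cos(a) - y * std::sin(a);
    double ry = x * std::sin(a) + y * std::cos(a);
    return { rx, ry * std::sin(pi / 6.0) };   // sin(30 deg) = 0.5
}
```

Projecting the four corners of the default cube's 2x2 base yields a diamond 2√2 units wide and √2 units tall, i.e. the 2:1 ratio our 64x32 camera expects.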

image

 

Next we set the dimensions of our camera in the Camera tab; we are going to start off with 64x32 at 100%, like so:

image

 

Now in the Properties editor, select the camera icon and we need to set the camera to Orthographic.  We then adjust our Orthographic scale until the camera extents capture the cube.

image

 

In the 3D View, switch to camera view (Numpad 0), then adjust the Orthographic Scale and the camera's Z position until your image is framed, like so:

image

 

Now change the camera dimensions from 64x32 to 64x64 and adjust the z value of the camera until the base tile is aligned to the bottom of the camera frame, like so:

image

 

Now render it out to a texture.  We already set up the render dimensions; now we simply need to set the renderer to render with transparency:

image

 

In the Shading section of the Render tab, set the Alpha mode to Transparent.  Now render your tile and TADA:

tile

 

Now let’s extrude our tile to the extents of our camera frame, like so:

image

 

Then render it out:

tile2

 

Now you can test our two isometric tiles out in Tiled:

image

 

Now you will notice the tiles look remarkably, um, tiled.  This is due to the shading.  In the material for your tile, turn Shadeless on, and the results look much cleaner.

image

 

Creating a Reference Cube

It’s possible you want a framework to model non-cube shaped entities in an isometric style.  Thankfully it’s incredibly easy to create such a thing. 

First take your reference cube, then apply a Wireframe modifier to it:

image

 

This now gives you a nice shape to use for reference, like so:

image

 

Now chances are if you are going to model within the reference you are going to want to make it unselectable and mark it so it doesn’t show up in the renders, like so:

image

 

Now of course you will probably want bigger than 64x64 tiles.  One option is to simply duplicate our cube, then move it by 2 units (as our cube is 2x2 in size) along the X axis, like so:

image

 

We can then render each tile by moving the camera by 2 along the X axis, setting all other tiles to not render, although this could easily be scripted.

 

Chances are we are going to want to make several tiles at once too and this would also be extremely easy.  Simply take your reference tile and apply an array modifier then another array modifier, like so:

image

 

The only difference between the two modifiers is the axis of the relative offset.  This now gives us a 6x6 grid of tiles to work with:

image

 

The only problem is that we now have one big object instead of 36 separate tiles, but no matter, this is easily resolved!  Make sure both Array modifiers have been applied.

 

Now enter Edit mode and press P –> Separate by loose parts.

Next select them all in Object mode and select Set Origin->Origin to Center of Mass.

 

And TADA, 36 tiles ready for editing!

GIF2

 

Of course if you wanted to edit the tiles as a whole (for example, connected terrain) you would do this before performing the separation. 

 

Next we have to render each tile.  Basically grab a tile, translate it to the origin, hide the other 35 tiles, render it.  Now repeat that process 35 times.  Sounds fun, eh?

Yeah, not to me either; good thing I’m a programmer. Let’s take a look at automating this using Python and Blender’s built-in API.  Please keep in mind, I rarely program in Python and have never scripted Blender before, so don’t expect brilliance!

import bpy

objs = []
for obj in bpy.data.objects:
    if obj.name.startswith("Cube"):
        objs.append(obj)

# loop through all the meshes, hide all but the active one,
# move the active one to (0,0,0) and render
for curObj in objs:
    prevLoc = curObj.location.copy()
    curObj.hide_render = False
    
    for otherObj in objs:
        if curObj != otherObj:
            otherObj.hide_render = True
            
    curObj.location = (0,0,0)
    bpy.data.scenes['Scene'].render.filepath = "c:\\temp\\" + curObj.name + ".png"
    bpy.ops.render.render(write_still = True)
    
    curObj.hide_render = True
    curObj.location = prevLoc

 

Now if you run that script it will render all of the tiles to your C:\temp directory.

 

Of course the key to making tiles look good is going to be the texture mapping…  but you’ve got all the tools you need to succeed and quickly at that.

 

The Video


19. November 2015

 

I remember using Allegro wayyyyyy back in the day in the early 1990s.  It was one of the few graphics libraries available for DOS-based machines (even though it started life as an Atari library) and certainly one of the only free game libraries available.  Amazingly enough, Allegro is still under active development.  Anyways, enough reminiscing…  today Allegro.js was released, an HTML5 library inspired by Allegro.  For a brand new library there is already an impressive amount of documentation and reference material available.  One of the hallmarks of the Allegro library was that it was brutally simple to use, and Allegro.js seems to have carried on that tradition.  Here is an example Allegro.js app:

 

// bitmap objects
var logo,ball;

// sample object
var bounce;

// size and speed of the ball
var size=64,speed=5;

// position of the ball
var cx=100,cy=100;

// velocity of the ball
var vx=speed,vy=speed;

// drawing function
function draw()
{
   // draw allegro logo background
   stretch_blit(logo,canvas,0,0,logo.w,logo.h,0,0,SCREEN_W,SCREEN_H);
   
   // draws the ball resized to size*size, centered
   // stretch it a bit vertically according to velocity
   stretch_sprite(canvas,ball,cx-size/2,cy-size/2,size,size+abs(vy));
}

// update game logic
function update()
{
   // did the ball bounce off the wall this turn?
   var bounced=false;

   
   // check whether the ball will collide with the screen bounds
   // after applying velocity; if so, reverse the velocity
   // and remember that it bounced
   if (cx+vx>SCREEN_W-size/2) {vx=-speed;bounced=true;}
   if (cy+vy>SCREEN_H-size/2) {vy=-speed*3;bounced=true;}
   if (cx+vx<size/2) {vx=speed;bounced=true;}
   if (cy+vy<size/2) {vy=speed;bounced=true;}
      
   // move the ball
   cx+=vx;
   cy+=vy;
   
   // if it bounced, play a sound
   if (bounced) play_sample(bounce);
   
   // add gravity
   vy+=.3;
}

// entry point of our example
function main()
{
   // enable debugging to console element
   enable_debug("debug");
   
   // init all subsystems, put allegro in canvas with id="canvas_id"
   // make the dimensions 640x480
   allegro_init_all("canvas_id", 640, 480);
   
   // load ball image
   ball = load_bmp("data/planet.png");
   
   // load background image
   logo = load_bmp("data/allegro.png");
   
   // load the bounce sound
   bounce = load_sample("data/bounce.mp3");

   // make sure everything has loaded
   ready(function(){
      
      // repeat this game loop
      loop(function(){
         
         // clear screen
         clear_to_color(canvas, makecol(255, 255, 255));

         // update game logic
         update();

         // render everything
         draw();
   
      // all this happens 60 times per second
      }, BPS_TO_TIMER(60));
   });
   
   // the end
   return 0;
}
// make sure that main() gets called as soon as the website has loaded
END_OF_MAIN();

 

If this looks interesting to you be sure to check out Allegro.js.  It looks like a cool library, based on another cool library, just waiting for a community to form around it.


19. November 2015

 

Just a quick note to announce a new series I have started, The GameDev Toolbox.

Toolbox

 

The series is intended to bring exposure to a wide variety of tools used in game development.  This includes graphic creation packages, IDEs, audio tools, design tools and more.  Basically if it isn’t a library, framework or game engine and it’s used for game development we will probably take a look at it.

These are not reviews, nor are they tutorials.  They are meant as a quick introduction and overview and nothing more.  For now the series is video only, but I will be creating a text based version in the near future.  There is a playlist for the series available here.

It currently consists of:

 

This video shows the premise of the series:


15. November 2015

 

We already have motion controls, audio control and of course tactile control, so what about controlling a game with your emotions?  It sounds far-fetched, but it really isn’t; in fact, it’s here today.  Just about every modern gaming device has a front-facing camera available in some form or another.  Well, Microsoft’s research lab Project Oxford just announced the release of an emotion tool which enables you to detect the emotions on a person’s face.

 

Basically you send it a picture and it returns the probability of various emotions.  For example, I sent it this image here and it returned the following results:

image

 

Basically you feed it an image in URL form and, optionally, a rectangle selecting the face, and it returns a JSON result with the probability of each emotion.  In this case Microsoft determined with 99.99% likelihood that this baby is sad, with a 0.001% chance that it’s angry instead.  I fed it a handful of pictures and it did a remarkably accurate job.
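Once you have parsed that JSON, picking the dominant emotion is just an argmax over the returned scores. A minimal sketch, assuming the response has already been parsed into a name-to-score map (the helper itself is hypothetical, not part of the Project Oxford SDK):

```cpp
#include <cassert>
#include <map>
#include <string>

// Hypothetical helper: given emotion scores parsed from the API's
// JSON response, return the name of the highest-scoring emotion
// (or an empty string if the map is empty).
std::string dominantEmotion(const std::map<std::string, double>& scores)
{
    std::string best;
    double bestScore = -1.0;
    for (const auto& entry : scores) {
        if (entry.second > bestScore) {
            bestScore = entry.second;
            best = entry.first;
        }
    }
    return best;
}
```

For the baby picture above, this would map the scores straight to "sadness" with no further thresholding needed.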

 

You can use it as a REST API now ( or just play with the playground ) right here.  Otherwise the SDK can be downloaded here.

 

So the tech is actually here already and works pretty well.  The question remains… would gamers embrace having a camera constantly scanning their emotions?  I actually can think of a few genres or tools where they might.  How many “you mad bro?” apps can we expect to flood the market now? If nothing else, it’s a fun tool to play around with on a Sunday afternoon!

Programming, Totally Off Topic
