GameMaker Standard is now free... I think

25. November 2013

The standard edition of the popular GameMaker software has just gone free, a $50 savings.  As mentioned in this reddit post as well as this blog post, if you update to the latest beta channel you get the opportunity to upgrade from the Free edition to Standard at no cost.  However, if you check out the forums, a fair number of people seem to be having issues, mostly existing customers it would appear.  Also, for reasons I don’t quite understand, this offer is currently unavailable for people using GameMaker through Steam.

 

If you’ve never heard of it, GameMaker is a higher-level game development system aimed at beginners creating 2D games.  It offers a drag-and-drop environment and is scripted using GML, GameMaker’s custom scripting language.  Somewhat recently YoYo released a native compiler that massively increases GML performance; however, this is not a standard feature.  The primary difference between the free version and the standard version is that the free version forces a splash screen on you and greatly limits the number of resources your game can use.  In addition to the free release of Standard, Tizen support is also free, most likely to correspond with their Tizen contest.

 

If you view GameMaker as just a toy, keep in mind it has been used to create a number of very successful titles, including Spelunky and Hotline Miami.  On the other hand, YoYo Games has also made some mind-numbingly stupid mistakes.  GameMaker may not be for everyone, myself included, but for many it is a great option, especially if you aren’t primarily a programmer.

 

So, if you want to check out GameMaker, now is a good time… just don’t do it on Steam!  What’s odd is the complete lack of a formal announcement from YoYo.




LibGDX Tutorial 8: Audio

19. November 2013

 

This section is going to be rather small because, well, frankly, LibGDX makes audio incredibly easy.  Unlike previous tutorials, this one is going to contain a number of snippets.  LibGDX supports three audio formats: OGG, MP3 and WAV.  MP3 is a format mired in legal issues, while WAV files are rather large, often leaving OGG as the best choice.  That said, when it comes to being broadly supported ( especially in browsers ), OGG can have issues.  This of course is why multiple formats exist and continue to be used!

Playing Sound Effects

 

Loading a sound file is trivial.  Like you did earlier with fonts and graphics, you need to add the files to the assets folder in the android project folder.  As before, I followed convention and put everything in the data subdirectory, like so:

(Screenshot: the data folder containing mp3.mp3, ogg.ogg and wav.wav)

 

As you can see, I added a file of each format, mp3.mp3, ogg.ogg and wav.wav.

 

Loading any of these files is incredibly simple:

Sound wavSound = Gdx.audio.newSound(Gdx.files.internal("data/wav.wav"));
Sound oggSound = Gdx.audio.newSound(Gdx.files.internal("data/ogg.ogg"));
Sound mp3Sound = Gdx.audio.newSound(Gdx.files.internal("data/mp3.mp3"));

This returns a Sound object using the specified file name.  Once you have a Sound, playing is trivial:

wavSound.play();

You also have the option of setting the play volume when calling play, such as:

oggSound.play(0.5f);

This plays the oggSound object at 50% volume for example.

 

In addition to play() you can also loop() to, well, loop a Sound continuously.  When you play a sound it returns an id that you can use to interact with that particular playback.  Consider:

// requires com.badlogic.gdx.utils.Timer and com.badlogic.gdx.utils.Timer.Task
final long id = mp3Sound.loop();   // final so the anonymous Task can capture it
Timer.schedule(new Task() {
   @Override
   public void run() {
      mp3Sound.stop(id);
   }
}, 5.0f);

 

Here you start an mp3 file looping, which returns an id value.  We then schedule a task to run 5 seconds later that stops the sound from playing.  Notice how an id is passed in the call to stop()?  This lets you manage a particular instance of a sound, because the same Sound object can be playing a number of times simultaneously.  One important thing to be aware of: Sound objects are a managed resource, so when you are done with them, call dispose().

wavSound.dispose();
oggSound.dispose();
mp3Sound.dispose();

 

Once you have a sound, there are a number of manipulations you can do.  You can alter the pitch:

long id = wavSound.play();
wavSound.setPitch(id,0.5f);

 

The first parameter is the id of the sound instance to alter; the second is the new pitch ( playback speed ).  The value must be between 0.5 and 2.0, where 1.0 is normal speed: less than 1 is slower, greater than 1 is faster.

You can alter the volume:

long id = wavSound.play();
wavSound.setVolume(id,1.0f);

 

Once again, you pass the id of the sound, as well as the volume to play at.  A value of 0 is silent, while 1 is full volume.  You can also set the pan ( stereo position ), like so:

long id = wavSound.play();
wavSound.setPan(id, 1f, 1f);

In this case the parameters are the sound instance id, the pan value ( –1 is full left, 0 is center, 1 is full right ) and the volume.  You can also specify the pitch, pan and volume when calling play() or loop().  One important note: none of these methods are guaranteed to work on the WebGL/HTML5 backend.  Additionally, file format support varies between browsers ( and is very annoying! ).
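Since volume, pitch and pan can all be passed to play(), a common trick is to derive the pan from an object's horizontal screen position.  Here's a minimal sketch; the PanHelper class and its names are my own invention, only the Sound.play(volume, pitch, pan) overload is LibGDX's:

```java
// Maps a screen x coordinate to a LibGDX pan value in [-1, 1].
// PanHelper is a hypothetical helper, not part of LibGDX.
public class PanHelper {
    /** Maps x in [0, screenWidth] to pan in [-1 (full left), 1 (full right)], clamped. */
    public static float panFor(float x, float screenWidth) {
        float pan = (x / screenWidth) * 2f - 1f;
        return Math.max(-1f, Math.min(1f, pan));
    }
}

// Usage with LibGDX ( assuming a loaded Sound and a 1280 pixel wide screen ):
// long id = wavSound.play(1.0f, 1.0f, PanHelper.panFor(objectX, 1280f));
```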

 

Streaming music

 

In addition to playing sound effects, LibGDX also offers support for playing music ( or longer-duration sound effects! ).  The big difference is that LibGDX will stream the audio in this case, greatly lowering the demands on memory.  This is done using the Music class.  Fortunately it’s remarkably simple:

Music mp3Music = Gdx.audio.newMusic(Gdx.files.internal("data/RideOfTheValkyries.mp3"));
mp3Music.play();

 

And that’s all you need to stream an audio file.  The controls are a bit different for a Music file.  First off, there is no id; a Music object represents a single stream, so you can’t play multiple overlapping instances of one Music object ( you would need to load multiple Music objects for that ).  Second, there are a series of VCR-style control options.  Here is a rather impractical example of playing a Music file:

 

Music mp3Music = Gdx.audio.newMusic(Gdx.files.internal("data/RideOfTheValkyries.mp3"));
mp3Music.play();
mp3Music.setVolume(1.0f);
mp3Music.pause();
mp3Music.stop();
mp3Music.play();
Gdx.app.log("SONG",Float.toString(mp3Music.getPosition()));

 

After our Music file is loaded, we start it, then set the volume to 100%.  Next we pause, then stop, then play our music file again.  As you can see from the log() call, you can get the current playback position of the Music object by calling getPosition().  This returns the elapsed time into the song, in seconds.  You may be wondering what exactly the difference is between pause() and stop().  Calling play() after pause() continues playing the song at the current position; calling play() after stop() restarts the song.

Once again, Music is a managed resource, so you need to dispose() it when done or you will leak memory.

 

Recording and playing PCM audio

 

LibGDX also has the ability to work at a lower level, using raw PCM data.  Basically this is an array of short (16-bit) or float (32-bit) values composing the waveform to play back.  This allows you to create audio effects programmatically.  You can also record audio into PCM form.  Consider the following example:

AudioDevice playbackDevice = Gdx.audio.newAudioDevice(44100, true);
AudioRecorder recordingDevice = Gdx.audio.newAudioRecorder(44100, true);
short[] samples = new short[44100 * 10]; // 10 seconds mono audio
recordingDevice.read(samples, 0, samples.length);
playbackDevice.writeSamples(samples, 0, samples.length);
recordingDevice.dispose();
playbackDevice.dispose();

 

This example creates an AudioDevice and an AudioRecorder.  In both functions you pass the desired sampling rate ( 44.1KHz is CD audio quality ) as well as a bool indicating whether you want mono ( single channel ) or stereo ( left/right ) audio.  Next we create an array to record our audio into.  In this example, we want 10 seconds worth of audio at the 44.1KHz sampling rate.  We then record the audio by calling the read() method of the AudioRecorder object, passing in the array to write to, the offset within the array to start at and finally the total sample length.  We then play back the audio we just recorded by calling writeSamples(), using the exact same parameters.  Both AudioDevice and AudioRecorder are managed resources and thus need to be disposed.

 

There are a few very important things to be aware of.  First, PCM audio is NOT available on HTML5.  Second, if you are recording in stereo, you need to double the size of your array.  The data in a stereo waveform is interleaved: the first value in the array is the first sample of the left channel, the next is the first sample of the right channel, the next is the second sample of the left channel, and so on.
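To make the interleaving concrete, here is a sketch that generates one second of interleaved stereo PCM: a 440Hz tone on the left channel and silence on the right.  The StereoTone class is my own invention; only newAudioDevice() and writeSamples() in the usage comment are LibGDX calls:

```java
// Generates interleaved stereo 16-bit PCM: left channel = sine tone, right = silence.
// StereoTone is a hypothetical helper class, not part of LibGDX.
public class StereoTone {
    public static short[] generate(int sampleRate, float frequency) {
        // Stereo means two shorts per frame: [left0, right0, left1, right1, ...]
        short[] samples = new short[sampleRate * 2]; // one second of audio
        for (int frame = 0; frame < sampleRate; frame++) {
            double t = frame / (double) sampleRate;
            short value = (short) (Math.sin(2 * Math.PI * frequency * t) * Short.MAX_VALUE);
            samples[frame * 2] = value; // left channel
            samples[frame * 2 + 1] = 0; // right channel stays silent
        }
        return samples;
    }
}

// Playback with LibGDX ( false = stereo, matching the interleaved data ):
// AudioDevice device = Gdx.audio.newAudioDevice(44100, false);
// short[] tone = StereoTone.generate(44100, 440f);
// device.writeSamples(tone, 0, tone.length);
// device.dispose();
```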

Programming




Vacation time! See you early next week

13. November 2013

 

(Photo: a Dominican Republic beach)

 

As you may be able to tell by the title… or the picture above, I am about to depart to somewhere a heck of a lot warmer than Toronto currently is!  It’s only a short vacation though, so expect posts to resume early next week.

 

Until then, see you at the beach!

Totally Off Topic




Unity 4.3 is finally released… yes, this is the one with 2D

12. November 2013

 

I know this is a release that many people have been waiting for, the Unity version with out of the box support for 2D.

 

So, what’s in this release:

  • 2D, including tools for making sprites, scene population, physics and animation. Pro version includes automatic texture atlas generation and alpha cut outs.
  • Mecanim improvements. New keyframe-based dopesheet, the ability to drive blend shapes, and perhaps most important, full scriptability of the animation system.
  • MonoDevelop 4.0.1 support.  MonoDevelop 4 is loads better, so this is nice.
  • iOS new features include Game Controller support, OpenGL ES3
  • Windows 8 support for the trial API, enabling try-before-you-buy in your game
  • Support for Plastic SCM version control software
  • Pro Only feature – nav mesh can now be altered at runtime
  • A lot of smaller improvements

 

You can see the new 2D workflow in the video below:

(Video: Unity 4.3 2D workflow demonstration)

You can download Unity 4.3 right here for free.

News




LibGDX Tutorial 7: Camera basics

6. November 2013

 

Now we are going to look quickly at using a camera, something we haven’t used in any of the prior tutorials.  Using a camera has a couple of advantages.  It gives you an easier way of dealing with device resolution, as LibGDX will scale the results to match your device’s resolution.  It also makes it easy to move the view around when your scene is larger than a single screen.  That is exactly what we are going to do in the code example below.

 

I am using a large ( 2048x1024 ) image that I obtained here.

 

Alright, now the code:

package com.gamefromscratch;

import com.badlogic.gdx.ApplicationListener;
import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.graphics.GL10;
import com.badlogic.gdx.graphics.OrthographicCamera;
import com.badlogic.gdx.graphics.Texture;
import com.badlogic.gdx.graphics.Texture.TextureFilter;
import com.badlogic.gdx.graphics.g2d.Sprite;
import com.badlogic.gdx.graphics.g2d.SpriteBatch;
import com.badlogic.gdx.input.GestureDetector;
import com.badlogic.gdx.input.GestureDetector.GestureListener;
import com.badlogic.gdx.math.Vector2;

public class CameraDemo implements ApplicationListener, GestureListener {
   private OrthographicCamera camera;
   private SpriteBatch batch;
   private Texture texture;
   private Sprite sprite;

   @Override
   public void create() {
      camera = new OrthographicCamera(1280, 720);
      batch = new SpriteBatch();
      texture = new Texture(Gdx.files.internal("data/Toronto2048wide.jpg"));
      texture.setFilter(TextureFilter.Linear, TextureFilter.Linear);
      sprite = new Sprite(texture);
      sprite.setOrigin(0, 0);
      sprite.setPosition(-sprite.getWidth()/2, -sprite.getHeight()/2);
      Gdx.input.setInputProcessor(new GestureDetector(this));
   }

   @Override
   public void dispose() {
      batch.dispose();
      texture.dispose();
   }

   @Override
   public void render() {
      Gdx.gl.glClearColor(1, 1, 1, 1);
      Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT);
      batch.setProjectionMatrix(camera.combined);
      batch.begin();
      sprite.draw(batch);
      batch.end();
   }

   @Override
   public void resize(int width, int height) {
   }

   @Override
   public void pause() {
   }

   @Override
   public void resume() {
   }

   @Override
   public boolean touchDown(float x, float y, int pointer, int button) {
      return false;
   }

   @Override
   public boolean tap(float x, float y, int count, int button) {
      return false;
   }

   @Override
   public boolean longPress(float x, float y) {
      return false;
   }

   @Override
   public boolean fling(float velocityX, float velocityY, int button) {
      return false;
   }

   @Override
   public boolean pan(float x, float y, float deltaX, float deltaY) {
      // Move the camera horizontally by the amount the user panned
      camera.translate(deltaX, 0);
      camera.update();
      return false;
   }

   @Override
   public boolean zoom(float initialDistance, float distance) {
      return false;
   }

   @Override
   public boolean pinch(Vector2 initialPointer1, Vector2 initialPointer2,
         Vector2 pointer1, Vector2 pointer2) {
      return false;
   }
}

 

Additionally in Main.java I changed the resolution to 720p like so:

package com.gamefromscratch;

import com.badlogic.gdx.backends.lwjgl.LwjglApplication;
import com.badlogic.gdx.backends.lwjgl.LwjglApplicationConfiguration;

public class Main {
   public static void main(String[] args) {
      LwjglApplicationConfiguration cfg = new LwjglApplicationConfiguration();
      cfg.title = "camera";
      cfg.useGL20 = false;
      cfg.width = 1280;
      cfg.height = 720;
      new LwjglApplication(new CameraDemo(), cfg);
   }
}

When you run it you will see:

 

(Screenshot: the demo running, showing the Toronto skyline image)

 

 

 

Other than being an image of my city’s skyline, it’s pannable.  You can swipe left or right to pan the image around.

 

The code is mostly familiar at this point, but the important new line is:

camera = new OrthographicCamera(1280, 720);

This is where we create the camera.  There are two kinds of cameras in LibGDX: Orthographic and Perspective.  Basically, an orthographic camera renders what is in the scene at exactly the size it is.  A perspective camera, on the other hand, emulates the way the human eye works, rendering objects slightly smaller as they get further away.  Here is an example from my Blender tutorial series.

 

Perspective:

(Render: the scene with a perspective camera)

Orthographic:

(Render: the same scene with an orthographic camera)

 

Notice how the far wing is smaller in the perspective render?  That’s what perspective rendering does for you.  In 2D rendering however, 99 times out of 100 you want to use Orthographic.

 

The values passed to the constructor are the resolution of the camera: its width and height.  In this particular case I chose to work in pixels, as I wanted to render at 1280x720.  You don’t have to, however… if you are using physics and want to use real-world units, for example, you could go with meters, or whatever you want.  The key thing is that your aspect ratio is correct.  The rest of the code in create() is about loading our image and positioning it about the origin of the world.  Finally we wire up our gesture handler so we can pan/swipe left and right on the image.
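If you do want world units instead of pixels, the usual idiom is to pick a world width and derive the height from the screen's aspect ratio.  A small sketch; the 10-unit width and the ViewportMath helper are my own choices, not anything LibGDX mandates:

```java
// Computes a camera height that preserves the device aspect ratio for a
// chosen world width. ViewportMath is a hypothetical helper, not LibGDX API.
public class ViewportMath {
    public static float heightFor(float worldWidth, int screenWidth, int screenHeight) {
        return worldWidth * screenHeight / (float) screenWidth;
    }
}

// In create(), a 10 meter wide view on a 1280x720 screen:
// camera = new OrthographicCamera(10f, ViewportMath.heightFor(10f, 1280, 720));
// yields a 10 x 5.625 world view, same 16:9 aspect ratio as the screen
```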

 

The next important call is in render():

batch.setProjectionMatrix(camera.combined);

This ties our LibGDX camera object to the OpenGL renderer.  The OpenGL rendering process depends on a number of matrices to properly translate from the scene or world to screen coordinates during rendering.  camera.combined returns the camera’s view and projection matrices multiplied together.  If you want more information about the math behind the scenes you can read here.  Of course, the entire point of the Camera classes is so you don’t have to worry about this stuff, so if you find it confusing, don’t sweat it; LibGDX takes care of the math for you.

Finally in our pan handler ( huh? ) we have the following code:

camera.translate(deltaX,0);

camera.update();

 

You can use translate() to move the camera around.  Here we move the camera along the X axis by the amount the user swiped, which causes the view of the image to move as the user swipes the screen/pans the mouse.  Once you are done modifying the camera, you need to update it; without calling update(), the camera would never move.
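One refinement you might want here: clamping the translation so the camera never pans past the edges of the 2048 pixel wide image.  Below is a sketch of the bounds math; CameraClamp is my own helper, not part of LibGDX:

```java
// Clamps the camera's center x so the viewport never shows past the image edges.
// Assumes the image is centered on the origin, as in the tutorial's create().
// CameraClamp is a hypothetical helper, not LibGDX API.
public class CameraClamp {
    public static float clampX(float x, float imageWidth, float viewportWidth) {
        // The camera center may wander at most half the leftover width each way.
        float limit = (imageWidth - viewportWidth) / 2f;
        return Math.max(-limit, Math.min(limit, x));
    }
}

// In pan(), after translating:
// camera.translate(deltaX, 0);
// camera.position.x = CameraClamp.clampX(camera.position.x, 2048f, 1280f);
// camera.update();
```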

There are a number of neat functions in the camera that we don’t use here.  There are functions to look at a point in space, to rotate or even rotate around ( orbit ) a vector.  There are also functions for projecting to and from screen to world space as well as code for ray casting into the scene.  In a straight 2D game though you generally won’t use a lot of this functionality.  We may take a closer look at the camera class later on when we jump to 3D.

Programming