5. September 2014

 

 

Other than being a word my brain seems to think is spelled incorrectly, Particles are a common part of many games.  Fortunately Phaser has pretty good support for particles.  In this tutorial we are going to take a look at how they are used.

 

First, a quick introduction to particles.  A particle system is simply a mass of things, namely particles.  A particle is often ( as in this case ) a simple sprite.  The entire idea behind a particle system is to control a large number of particles using a single controller.  Particles are commonly used to create effects like fire and smoke, where dozens, hundreds or thousands of smoke and flame particles are emitted ( pushed ) from the particle emitter.  As the emitter moves, so do the particles emitted from it.  This allows you to define overall behavior in the emitter, not in each particle.  If that makes no sense, don’t worry, the examples below should clear it up.

 

Let’s take a look at a very basic emitter.  First we need a particle graphic.  In this case I am using the following graphic:

 

particle

 

It’s a simple transparent png image named particle.png.  Now the code:

 

/// <reference path="phaser.d.ts"/>

class SimpleGame {
    game: Phaser.Game;
    sound: Phaser.Sound;
    emitter: Phaser.Particles.Arcade.Emitter;

    constructor() {
        this.game = new Phaser.Game(640, 480, Phaser.AUTO, 'content',
                    { create: this.create, preload: this.preload });
    }
    preload() {
        // Load the image we are going to use for our particle
        this.game.load.image("particle", "particle.png");
    }
    create() {
        // Now we are creating the particle emitter, centered to the world
        this.emitter = this.game.add.emitter(this.game.world.centerX,
                       this.game.world.centerY);
        // Make the particles for the emitter to emit.  We use the key for the
        // particle graphic we loaded earlier.  We want 500 particles total
        this.emitter.makeParticles('particle', 1, 500, false, false);
        // And BOOM, emit all 500 particles at once with a life span of 10 seconds
        this.emitter.explode(10000, 500);
    }
}

window.onload = () => {
    var game = new SimpleGame();
};

 

Now when you run this code:

 

p1

 

500 particles using our png texture “explode” out of the emitter all at once.  The code is fairly heavily commented to explain what exactly is going on so I am going to move on to the next example.

 

Some ( most? ) of the time, you aren’t going to want all of your particles exploding at once, unless perhaps you are modeling an explosion.  Instead you often want them to stream out over time, perhaps from a moving source.  You also often want to use more than a single particle graphic for variety.  Finally, you often want those particles to physically interact with the world.  This example is going to do all of this.

 

This time we are going to use multiple (poorly edited by me) image files for our particles:

particle  particle2  particle3

 

And now the code:

 

/// <reference path="phaser.d.ts"/>

class SimpleGame {
    game: Phaser.Game;
    sound: Phaser.Sound;
    emitter: Phaser.Particles.Arcade.Emitter;
    sprite: Phaser.Sprite;

    constructor() {
        this.game = new Phaser.Game(640, 480, Phaser.AUTO, 'content', {
            create: this.create, preload: this.preload,
            update: this.update
        });
    }
    preload() {
        // This time we have 3 different particle graphics to use
        this.game.load.image("particle", "particle.png");
        this.game.load.image("particle2", "particle2.png");
        this.game.load.image("particle3", "particle3.png");

        this.game.load.image("logo", "gfslogo.png");
    }
    update() {
        // This checks for and handles collisions between our sprite and
        // particles from the emitter
        this.game.physics.arcade.collide(this.sprite, this.emitter);
    }
    create() {
        this.emitter = this.game.add.emitter(0, 0);
        // As you can see, you can pass in an array of keys to particle graphics
        this.emitter.makeParticles(['particle', 'particle2', 'particle3'], 1,
                                   500, false, false);
        // Instead of exploding, you can also release particles as a stream
        // Each particle lives 10 seconds, one is released every 20ms, up to a total of 500
        // Which of the 3 graphics is used is chosen randomly by the emitter
        this.emitter.start(false, 10000, 20, 500, true);

        // This line of code illustrates that you can move a particle emitter.
        // In this case, left to right across the top of the screen.
        // Ignore the details of tweening for now if it's new, we will discuss it later
        this.game.add.tween(this.emitter).to({ x: this.game.width }, 10000, null,
                            true, 0, 1, true).start();

        // Let's create a sprite in the center of the screen
        this.sprite = this.game.add.sprite((this.game.width / 2) - 100,
                      (this.game.height / 2) - 100, "logo");
        // We want it to be part of the physics of the world, to give the
        // particles something to respond to
        // Note, we will cover physics in more detail later ( after tweening perhaps... ;) )
        this.game.physics.enable(this.sprite);
        this.sprite.body.immovable = true;
    }
}

window.onload = () => {
    var game = new SimpleGame();
};

 

And run it:

 

p2

 

As you can see, you can provide multiple particle graphics, move the emitter and have it physically interact with the scene.  Don’t worry overmuch about tweening or physics; these are two subjects we will cover later in more detail.

 

In these two examples, we’ve looked at particles like you would use for special effects work, a very common situation.  That said, particles can actually be used to control a large number of game entities that have common overall behavior, but randomized location, rotation, size or scale.  Take, for example, a flight of animated birds.  You might want to add birds flying in the background of your game.  Let’s take a look at how to do this!

 

First I am using ( a full sized version of ) this sprite sheet, taken from here.

robin-782x1024

 

It’s an animated sequence of a robin in flight.  It contains 22 frames, each 240x314 pixels in size.

 

Now let’s take a look at the code:

 

/// <reference path="phaser.d.ts"/>


// Create our own Particle class
class MyParticle extends Phaser.Particle {
    elapsedTime: number;
    currentFrame: number;
    static MAX_FRAME: number = 22;
    game: Phaser.Game;

    // In the constructor, randomly pick a starting frame of animation so the
    // birds aren't all showing the same frame at the same time
    constructor(game: Phaser.Game, x: number, y: number, key?: any, frame?: any) {
        this.currentFrame = Math.floor(Math.random() * MyParticle.MAX_FRAME);
        this.elapsedTime = 0;
        this.game = game;

        // Now call Particle's constructor, passing in the spritesheet and the
        // frame to use initially
        super(game, x, y, "ss", this.currentFrame);
    }

    update() {
        super.update();

        // Every 100ms move to the next frame of animation
        this.elapsedTime += this.game.time.elapsed;
        if (this.elapsedTime >= 100) {
            this.currentFrame++;
            // Wrap back to the first frame ( valid frames are 0..21 )
            if (this.currentFrame >= MyParticle.MAX_FRAME) this.currentFrame = 0;
            this.frame = this.currentFrame;
            this.elapsedTime = 0;
        }
    }
}

class SimpleGame {
    game: Phaser.Game;
    sound: Phaser.Sound;
    emitter: Phaser.Particles.Arcade.Emitter;

    constructor() {
        this.game = new Phaser.Game(640, 480, Phaser.AUTO, 'content', {
            create: this.create, preload:
            this.preload, render: this.render
        });
    }
    preload() {
        // Load the spritesheet containing the frames of animation of our bird
        // The cells of animation are 240x314 and there are 22 of them
        this.game.load.spritesheet("ss", "robin.png", 240, 314, 22);
    }
    render() {
    }
    create() {
        // Now we are creating the particle emitter, centered to the world
        this.emitter = this.game.add.emitter(this.game.world.centerX,
                       this.game.world.centerY);

        this.emitter.particleClass = MyParticle;
        this.emitter.makeParticles();
        
        // In this case, we don't want gravity affecting the birds and we don't
        // want them to rotate
        this.emitter.gravity = 0;
        this.emitter.minRotation = 0;
        this.emitter.maxRotation = 0;

        // Now start emitting particles.  Each bird lives 100 seconds, one is
        // created every 500ms ( 2/sec ), for a total of 100 birds.
        this.emitter.start(false, 100000, 500, 100, true);
    }
}

window.onload = () => {
    var game = new SimpleGame();
};

 

And when run:

 

p3

 

The terrible graphics are a side effect of being converted to an animated GIF that would take forever to download!

 

It just dawned on me that I haven’t actually covered spritesheets yet… oops.  Hey, this is how you use a spritesheet! :)  Don’t worry if you struggle with the spritesheet portions, we will cover that later as well.  The important part here is the Particle itself.  In this example we extend the class Particle to create our own Particle class.  This class then handles updating itself, resulting in the animation of the bird.  Obviously you could put a great deal more logic in there, although be sure to keep things pretty light computationally, especially if you intend to create a number of them.
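As a purely hypothetical example ( this is not from the original article ), you could extend MyParticle so each bird fades out as it nears the end of its life.  Phaser particles are Sprites, so they already expose lifespan ( the remaining life in milliseconds ) and alpha, which keeps the per-frame cost to a single division:

// Hypothetical sketch: a cheap per-particle effect layered on top of MyParticle
class FadingParticle extends MyParticle {
    // Must match the lifespan value passed to emitter.start()
    static TOTAL_LIFE: number = 100000;

    update() {
        super.update();
        // Fade from fully opaque down to invisible over the particle's life
        this.alpha = this.lifespan / FadingParticle.TOTAL_LIFE;
    }
}

To use it, you would simply assign this.emitter.particleClass = FadingParticle instead of MyParticle.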

 

This example actually illustrates a couple of really nice features of TypeScript that map cleanly to Phaser’s JavaScript.  The first is the concept of a constructor.  This is code that is called as the object is created.  The next, more important, concept is class and extends.  In this case we are extending an existing JavaScript class, Particle.  A quick look behind the curtains now.

 

In TypeScript we write:

 

class MyParticle extends Phaser.Particle

 

This results in the JavaScript

 

var __extends = this.__extends || function (d, b) {
    for (var p in b) if (b.hasOwnProperty(p)) d[p] = b[p];
    function __() { this.constructor = d; }
    __.prototype = b.prototype;
    d.prototype = new __();
};
// Create our own Particle class
var MyParticle = (function (_super) {
    __extends(MyParticle, _super);

 

Pretty cool stuff, and certainly easier to read and maintain.

 

A Small Bug

 

While writing this I ran into a small bug I figured I should make you aware of ( and one of the downsides of working with TypeScript ).  In the code sample we had the following line:

 

this.emitter.particleClass = MyParticle;

 

This line is specifying the type of class to use for a Particle, not an instance of the class.  Unfortunately the phaser.d.ts file is incorrect in its definition.  It instead expects particleClass to receive a Phaser.Particle instance.  This is wrong ( and took some time to root out ).  In the definition, if it isn’t fixed by the time you read this, make the following edit to Emitter:

 

particleSendToBack: boolean;
// MJF Fix particleClass: Phaser.Sprite;
particleClass: any;

 

Hopefully that will be fixed by the time you read this.

 



27. August 2014

 

Today we are going to look at implementing physics in LibGDX.  This technically isn’t part of LibGDX itself, but is instead implemented as an extension.  The physics engine used in LibGDX is the popular Box2D physics system, a library that has been ported to basically every single platform and language ever invented, or so it seems.  We are going to cover how to implement Box2D physics in your 2D LibGDX game.  This is a complex subject so it will require multiple parts.

 

If you’ve never used a Physics Engine before, we should start with a basic overview of what they do and how they work.  Essentially a physics engine takes scene information that you provide, then calculates “realistic” movement using physics calculations.  It goes something like this:

  • you describe all of the physics entities in your world to the physics engine, including bounding volumes, mass, velocity, etc
  • you tell the engine to update, either per frame or on some other interval
  • the physics engine calculates how the world has changed, what’s collided with what, how much gravity affects each item, current speed and position, etc
  • you take the results of the physics simulation and update your world accordingly.

 

Don’t worry, we will look at exactly how in a moment.

 

First we need to talk for a moment about creating your project.  Since Box2D is now implemented as an extension ( an optional LibGDX component ), you need to add it either manually or when you create your initial project.  Adding a library to an existing project is IDE dependent, so I am instead going to look at adding it during project creation… and totally not just because it’s really easy that way.

 

When you create your LibGDX project using the Project Generator, you simply specify which extensions you wish to include and Gradle does the rest.  In this case you simply check the box next to Box2d when generating your project like normal:

 

image

 

… and you are done.  You may be asking, hey, what about Box2dlights?  Nope, you don’t currently need that one.  Box2dlights is a project for simulating lighting and shadows based off the Box2D physics engine.  You may notice in that list another entry named Bullet.  Bullet is another physics engine, although more commonly geared towards 3D games, possibly more on that at a later date.  Just be aware that if you are working in 3D, Box2D isn’t of much use to you, but there are alternatives.

 

Ok, now that we have a properly configured project, let’s take a look at a very basic physics simulation.  We are simply going to take the default LibGDX graphic and apply gravity to it, about the simplest simulation you can make that actually does something.  Code time!

 

package com.gamefromscratch;

import com.badlogic.gdx.ApplicationAdapter;
import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.graphics.GL20;
import com.badlogic.gdx.graphics.Texture;
import com.badlogic.gdx.graphics.g2d.Sprite;
import com.badlogic.gdx.graphics.g2d.SpriteBatch;
import com.badlogic.gdx.math.Vector2;
import com.badlogic.gdx.physics.box2d.*;

public class Physics1 extends ApplicationAdapter {
    SpriteBatch batch;
    Sprite sprite;
    Texture img;
    World world;
    Body body;

    @Override
    public void create() {

        batch = new SpriteBatch();
        // We will use the default LibGDX logo for this example, but we need a
        // Sprite since it's going to move
        img = new Texture("badlogic.jpg");
        sprite = new Sprite(img);

        // Center the sprite in the top/middle of the screen
        sprite.setPosition(Gdx.graphics.getWidth() / 2 - sprite.getWidth() / 2,
                Gdx.graphics.getHeight() / 2);

        // Create a physics world, the heart of the simulation.  The Vector
        // passed in is gravity
        world = new World(new Vector2(0, -98f), true);

        // Now create a BodyDefinition.  This defines the physics object's type
        // and position in the simulation
        BodyDef bodyDef = new BodyDef();
        bodyDef.type = BodyDef.BodyType.DynamicBody;
        // We are going to use 1 to 1 dimensions, meaning 1 in the physics engine is 1 pixel
        // Set our body to the same position as our sprite
        bodyDef.position.set(sprite.getX(), sprite.getY());

        // Create a body in the world using our definition
        body = world.createBody(bodyDef);

        // Now define the dimensions of the physics shape
        PolygonShape shape = new PolygonShape();
        // We are a box, so this makes sense, no?
        // Basically set the physics polygon to a box with the same dimensions as our sprite
        shape.setAsBox(sprite.getWidth()/2, sprite.getHeight()/2);

        // FixtureDef is a confusing expression for physical properties
        // Basically this is where, in addition to defining the shape of the body,
        // you also define its properties like density, restitution and others we will see shortly
        // If you are wondering, density and area are used to calculate overall mass
        FixtureDef fixtureDef = new FixtureDef();
        fixtureDef.shape = shape;
        fixtureDef.density = 1f;

        Fixture fixture = body.createFixture(fixtureDef);

        // Shape is the only disposable of the lot, so get rid of it
        shape.dispose();
    }

    @Override
    public void render() {

        // Advance the world by the amount of time that has elapsed since the last frame
        // Generally in a real game, don't do this in the render loop, as you are
        // tying the physics update rate to the frame rate, and vice versa
        world.step(Gdx.graphics.getDeltaTime(), 6, 2);

        // Now update the sprite's position to match its newly updated physics body
        sprite.setPosition(body.getPosition().x, body.getPosition().y);

        // You know the rest...
        Gdx.gl.glClearColor(1, 1, 1, 1);
        Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
        batch.begin();
        batch.draw(sprite, sprite.getX(), sprite.getY());
        batch.end();
    }

    @Override
    public void dispose() {
        // Hey, I actually did some clean up in a code sample!
        img.dispose();
        world.dispose();
    }
}

 

The program running:

 

test

 

What's going on here is mostly defined in the comments, but I will give a simpler overview in English.  Basically when using a physics engine, you create a physical representation for each corresponding object in your game.  In this case we created a physics object ( Body ) that goes along with our sprite.  It’s important to realize there is no actual relationship between these two objects.  There are a couple of components that go into a physics body: BodyDef, which defines what type of body it is ( more on this later, for now realize DynamicBody means a body that is updated and capable of movement ), and FixtureDef, which defines the shape and physical properties of the Body.  Of course, there is also the World, which is the actual physics simulation.

 

So, basically we created a Body, which is the physical representation of our Sprite in the physics simulation.  Then in render() we call the incredibly important step() method.  Step is what advances the physics simulation… basically think of it as the play button.  The physics engine then calculates all the various things that have changed since the last call to step.  The first value we pass in is the amount of time that has elapsed since the last update.  The next two values control the amount of accuracy in contact/joint calculations for velocity and position.  Basically the higher the values, the more accurate your physics simulation will be, but the more CPU intensive it is as well.  Why 6 and 2?  ‘Cause that’s what the LibGDX site recommends and that works for me.  At the end of the day these are values you can tweak to your individual game.  The one other critical takeaway here is that we update the sprite’s position to match the newly updated body’s position.  Once again, in this example there is no actual link between a physics body and a sprite, so you have to do it yourself.

 

There you go, the world’s simplest physics simulation.  There are a few quick topics to discuss before we move on.  First, units.

 

This is an important and sometimes tricky concept to get your head around with physics systems.  What does 1 mean?  One what?  The answer is, whatever the hell you want it to be, just be consistent about it!  In this particular case I used pixels.  Therefore 1 unit in the physics engine represents 1 pixel on the screen.  So when I said gravity is (0,-98), that means gravity is applied at a rate of –98 pixels along the y axis per second.  Just as commonly, 1 in the physics engine could be meters, feet, kilometers, etc… then you use a custom ratio for translating to and from screen coordinates.  Most physics systems, Box2D included, really don’t like you mixing your scales however.  For example, if you have a universe simulation where 1 == 100 miles, then try to calculate the movement of an ant at 0.0000001 x 100 miles per hour, you will break the simulation, hard.  Find a scale that works well with the majority of your game and stick with it.  Extremely large and extremely small values within that simulation will cause problems.
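If you do decide to work in real world units instead of pixels, the usual approach is a single conversion constant used everywhere you cross the boundary between the physics world and the screen.  Here is a minimal sketch of the idea, written in TypeScript only for consistency with the Phaser samples elsewhere on this page; the constant and helper names are made up, and the same two helpers translate directly to Java:

// One scale factor, used at every physics/render boundary
const PIXELS_PER_METER = 100;

function toMeters(pixels: number): number {
    return pixels / PIXELS_PER_METER;
}

function toPixels(meters: number): number {
    return meters * PIXELS_PER_METER;
}

// Writing into the simulation:   bodyDef.position.set(toMeters(spriteX), toMeters(spriteY))
// Reading back out for drawing:  spriteX = toPixels(bodyPositionX)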

 

Finally, a bit of a warning about how I implemented this demo, and hopefully something I will cover properly at a later date.  In this case I updated the physics system in the render loop.  This is a possibility, but generally wasteful.  It’s fairly common to run your physics simulation at a fixed rate ( 30Hz and 60Hz being two of the most common, but lower is also a possibility if you are processing constrained ) and your render loop as fast as possible.
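The standard way to decouple the two is an accumulator: add each frame’s elapsed time to a running total, then step the simulation in fixed-size slices until that total is used up.  A minimal sketch of the pattern ( again in TypeScript for consistency with the other samples on this page; in LibGDX you would wrap world.step() in exactly the same way in Java ):

// Fixed timestep accumulator: render as fast as you like, step physics at a constant rate
const TIME_STEP = 1 / 60;      // 60Hz physics
const MAX_FRAME_TIME = 0.25;   // clamp huge frame spikes so the loop can't spiral
let accumulator = 0;

function updatePhysics(deltaSeconds: number, stepWorld: (dt: number) => void): void {
    accumulator += Math.min(deltaSeconds, MAX_FRAME_TIME);
    while (accumulator >= TIME_STEP) {
        stepWorld(TIME_STEP);  // e.g. world.step(TIME_STEP, 6, 2) with Box2D
        accumulator -= TIME_STEP;
    }
}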

 

In the next part we will give our object something to collide with, stay tuned.



11. August 2014

 

 

In the previous tutorial, which I will admit… some time has elapsed since I wrote it, sorry about that… we looked at handling keyboard input using Phaser.  In this tutorial we are going to look at handling mouse and touch events.  Just like the last part, this one is going to be relatively light on explanation and heavy on code.

 

Ok, let’s jump right in.  The first example shows how to poll for mouse/touch information:

 

/// <reference path="phaser.d.ts"/>

// This code demonstrates polling for touch or mouse clicks
class SimpleGame {
    game: Phaser.Game;
    jetSprite: Phaser.Sprite;

    constructor() {
        this.game = new Phaser.Game(640, 480, Phaser.AUTO, 'content', {
            create: this.create, preload: this.preload, update: this.update
        });
    }

    preload() {
        var loader = this.game.load.image("jet", "jet.png");
    }

    create() {
        var image = <Phaser.Image>this.game.cache.getImage("jet");
        this.jetSprite = this.game.add.sprite(
            this.game.width / 2 - image.width / 2,
            this.game.height / 2 - image.height / 2,
            "jet");

        this.jetSprite.pivot.x = this.jetSprite.width / 2;
        this.jetSprite.pivot.y = this.jetSprite.height / 2;
    }

    update() {

        // You can poll mouse status
        if (this.game.input.activePointer.isDown) {
            // This will be set to true if any mouse button is pressed or a finger is touching the screen
            this.jetSprite.position.set(this.game.input.mousePointer.x,
                this.game.input.mousePointer.y);

            // You can test if the user is using a mouse
            if (this.game.input.activePointer.isMouse) {
                // In the case of a mouse, you can check mouse button status
                if (this.game.input.activePointer.button == Phaser.Mouse.RIGHT_BUTTON) {

                    // We just want to clear it so this doesn't fire over and over; don't do this in production
                    this.game.input.activePointer.reset();
                    alert("Right button pressed");
                }
            }
            else {
                // On the other hand, if you are dealing with touch, you can have multiple touches/pointers
                // By default there are two pointers defined, so you can have up to two touches.
                // You can add more pointers using input.addPointer();
                if (this.game.input.pointer1.isDown && this.game.input.pointer2.isDown)
                    alert("Multitouch!");
            }
        }
    }
}

window.onload = () => {
    var game = new SimpleGame();
};

 

 

Pointer is an abstraction for both touch and mouse.  As you can see above, there are multiple pointers ( input.pointer1, input.pointer2, etc… ) but by default only two will be enabled.  If you want to handle multitouch with more than 2 fingers you need to register more using addPointer().   You can add up to 10 pointers total.  pointer1 is the first finger to touch the screen, pointer2 the second, etc.
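As a quick sketch ( assuming Phaser 2.x; the handler bodies here are just placeholders ), registering and polling extra pointers looks something like this:

// Inside create(): allow up to four simultaneous touches instead of the default two
this.game.input.addPointer();   // enables pointer3
this.game.input.addPointer();   // enables pointer4

// Inside update(): each pointer can then be polled individually
if (this.game.input.pointer3.isDown) {
    // a third finger is currently touching the screen
}
if (this.game.input.pointer4.isDown) {
    // a fourth finger is currently touching the screen
}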

 

Now let’s take a look at handling mouse input using callbacks instead of polling:

 

/// <reference path="phaser.d.ts"/>

// This code demonstrates handling touch or mouse input using callbacks
class SimpleGame {
    game: Phaser.Game;
    jetSprite: Phaser.Sprite;

    constructor() {
        this.game = new Phaser.Game(640, 480, Phaser.AUTO, 'content', {
            create: this.create, preload: this.preload
        });
    }

    preload() {
        var loader = this.game.load.image("jet", "jet.png");
    }

    create() {
        var image = <Phaser.Image>this.game.cache.getImage("jet");
        this.jetSprite = this.game.add.sprite(
            this.game.width / 2 - image.width / 2,
            this.game.height / 2 - image.height / 2,
            "jet");

        this.jetSprite.pivot.x = this.jetSprite.width / 2;
        this.jetSprite.pivot.y = this.jetSprite.height / 2;

        // You can handle mouse input by registering a callback as well
        // The following registers a callback that will be called each time the mouse is moved
        this.game.input.moveCallback = (pointer:Phaser.Pointer,x:number,y:number) => {
            this.jetSprite.position.set(x, y);
        }

        // This one registers a mouse click handler that will be called
        this.game.input.onDown.add(SimpleGame.prototype.mouseDown);

    }

    mouseDown(event:MouseEvent) {
        alert("Mouse is down " + event.button);
    }

}

window.onload = () => {
    var game = new SimpleGame();
};

 

 

Here we register a function that will be called whenever the mouse moves, in this case using an anonymous delegate.  You can also handle mouse down or touch events.  The format is slightly different as onDown is a Signal instead of a callback.  That said, a Signal is basically a wrapper around a callback function, so it’s not really all that different.  Except of course with Signals, you can register multiple listeners for the same event.
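For example ( a small sketch, assuming Phaser 2.x Signals; the handler bodies are just placeholders ), you can attach several listeners to the same onDown Signal, including one-shot listeners:

// Inside create(): every listener added to the Signal fires on each press
this.game.input.onDown.add(() => console.log("first listener fired"));
this.game.input.onDown.add(() => console.log("second listener fired"));

// addOnce() registers a listener that removes itself after the first call
this.game.input.onDown.addOnce(() => console.log("this one fires only once"));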

 

Finally, in the previous two examples, we handled the input at an application level.  If it makes sense for your game, you can actually handle input at the Sprite level.  Like so:

 

/// <reference path="phaser.d.ts"/>

// In this example we illustrate input events can be applied to Sprites
class SimpleGame {
    game: Phaser.Game;
    jetSprite: Phaser.Sprite;

    constructor() {
        this.game = new Phaser.Game(640, 480, Phaser.AUTO, 'content', {
            create: this.create, preload: this.preload
        });
    }

    preload() {
        var loader = this.game.load.image("jet", "jet.png");
    }

    create() {
        var image = <Phaser.Image>this.game.cache.getImage("jet");
        this.jetSprite = this.game.add.sprite(
            this.game.width / 2 - image.width / 2,
            this.game.height / 2 - image.height / 2,
            "jet");

        this.jetSprite.pivot.x = this.jetSprite.width / 2;
        this.jetSprite.pivot.y = this.jetSprite.height / 2;

        // First enable the sprite to receive input
        this.jetSprite.inputEnabled = true;
        
        // Then add an event handler for input over
        this.jetSprite.events.onInputOver.add(() => {
            alert("The mouse passed over the sprite!");
        });
    }


}

window.onload = () => {
    var game = new SimpleGame();
};

 

 

The first thing you need to do is tell Phaser that your Sprite will handle input by setting inputEnabled to true.  Next we add an input handler for onInputOver, which once again is a Signal.  There are a number of events that a Sprite can respond to, all predictably enough in the events class.
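Here is a small sketch of a few of the other per-sprite Signals ( again assuming Phaser 2.x; the handler bodies are just placeholders ):

// Inside create(), after setting this.jetSprite.inputEnabled = true;
this.jetSprite.events.onInputDown.add(() => {
    console.log("sprite was clicked or touched");
});
this.jetSprite.events.onInputUp.add(() => {
    console.log("the click or touch on the sprite was released");
});
this.jetSprite.events.onInputOut.add(() => {
    console.log("the pointer left the sprite");
});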

 

As a side note, part of the delay in updating this tutorial series is I was actually intending to write one about using GamePads.  First I started writing code and my XBox 360 wireless receiver died horribly.  Apparently this is a very common problem with them, a tiny little fragile fuse broke.  I then ordered a replacement from Amazon and it took three weeks to arrive.  Then when it did arrive, well…

 

Let’s just say Gamepad support in HTML5 is complete and utter garbage.  There are a bunch of projects in the works, but they are all a long way away from being actually usable.  If you are looking to add gamepad support to your HTML5 project, expect it to be very browser specific.  By this I actually mean specific to the exact version of the browser…  The APIs are changing constantly and are breaking just as often.  In my opinion, it’s simply not worth it at this point.

 



25. July 2014

 

Now we are going to look at using physics in our SpriteKit based game.  If you’ve worked with a physics engine before, a lot of this is going to feel very familiar.  One of the key differences from many other engines is that SpriteKit handles updating the graphics as well as the physics.  It’s a fairly involved process, so I am going to split this across multiple posts.

 

The first one is going to be short and sweet.  We are going to configure the physics of a sphere, an SKShapeNode.  It is simply going to exist and let gravity take its course.  Code time:

 

import SpriteKit

class GameScene: SKScene {

    override func didMoveToView(view: SKView) {

        view.scene!.anchorPoint = CGPoint(x: 0.5, y: 0.5)

        var shapeNode = SKShapeNode();
        var path = CGPathCreateMutable();
        CGPathAddArc(path, nil, 0, view.bounds.height/2, 45, 0, M_PI*2, true);
        shapeNode.path = path;
        shapeNode.lineWidth = 2.0;

        shapeNode.physicsBody = SKPhysicsBody(circleOfRadius: 45.0);
        shapeNode.physicsBody.dynamic = true;

        // Make gravity "fall" at 1 unit per second along the y-axis
        self.physicsWorld.gravity.dy = -1;

        self.addChild(shapeNode);
    }
}

 

And once run:

 

 

Well, that was simple enough, let’s take a quick run through the code.

 

We start off by creating an SKShapeNode.  This shape node is defined by a CGPath.  You can think of a CGPath as a set of drawing instructions.  In this case it consists of a single arc that draws a circle.  Once our path is created we set it as our shapeNode’s path.  We set the lineWidth to 2 to make it a bit more visible.

 

Next we define the physicsBody.  Every SKNode has an SKPhysicsBody.  The SKPhysicsBody is the node’s representation in the physics engine.  When defining a physics body you pick the shape that most closely matches your underlying shape.  In this case it’s a no brainer to use a circle.  There are constructors available for rectangles, polygons, edges, etc.  Of all the options however, the circle is the least processing intensive.  So if you need just a rough physical representation, prefer the simpler types ( circle, then rectangle ) and leave the more complex types until required.  The key thing here is the dynamic property.  This is what tells SpriteKit to include your SKNode in the physics calculation.  If this isn’t set, nothing is going to happen!

 

Finally we set the gravity value dy to -1.  This means gravity moves things by a delta of -1 unit per second on the y axis.  Notice the concept of “unit” here, it is very important.  When dealing with SpriteKit ( and many other physics engines ), a unit is completely arbitrary.  So you may ask “1 what?”.  The answer is, 1 whatever… just be consistent about it.  In your code you could choose 1 to be a millimetre, an inch, a foot, a lightyear.  A lot of it comes down to the type of game you are working on.  One thing to keep in mind here though: massive ranges in value are not a good thing… so don’t mix units.  For example, trying to represent something in millimetres and kilometres in the same simulation is a sure recipe for disaster!

 

Now let’s give our ball something to collide with… that being the edge of the window.  While we are at it, let’s add a bit of bounce to our step:

 

import SpriteKit

class GameScene: SKScene {

    override func didMoveToView(view: SKView) {

        // Create the ball
        var shapeNode = SKShapeNode();
        var path = CGPathCreateMutable();
        CGPathAddArc(path, nil, 0, 0, 45, 0, M_PI*2, true);
        CGPathCloseSubpath(path);
        shapeNode.path = path;
        shapeNode.lineWidth = 2.0;
        shapeNode.position = CGPoint(x:self.view.frame.width/2, y:self.view.frame.height);

        // Set the ball's physical properties
        shapeNode.physicsBody = SKPhysicsBody(circleOfRadius: shapeNode.frame.width/2);
        shapeNode.physicsBody.dynamic = true;
        shapeNode.physicsBody.mass = 1;
        shapeNode.physicsBody.friction = 0.2;
        shapeNode.physicsBody.restitution = 1;

        // Now make the edges of the screen a physics object as well
        scene.physicsBody = SKPhysicsBody(edgeLoopFromRect: view.frame);

        // Make gravity "fall" at 1 unit per second along the y-axis
        self.physicsWorld.gravity.dy = -1;

        self.addChild(shapeNode);
    }
}

 

And run this:

 

 

 

Note, the jerkiness you see isn’t from the physics, but instead from the animated gif.  Getting that to encode to a reasonable size was oddly a bit of a battle.

 

The key difference here is, first of all, that we set the edges of the screen as a physics object for our ball to collide against.  A key thing to remember with SpriteKit: everything is an SKNode, even the scene itself!

 

Next we set a couple of key physics properties for our ball: mass, friction and restitution.  Mass is the weight of the object… going back to the age old question, what falls faster… a ton of feathers, or a ton of bricks?  Friction is how two surfaces react to each other, generally used when simulating “sliding”, and is fairly irrelevant in this example.  Restitution is the key value.  For lack of a better explanation, restitution is the bounciness of the object.  Lower value, less bounce; higher value, higher bounce.  A value higher than 1 will result in an object actually gaining momentum after a collision ( aka, going higher UP than it fell from before bouncing ).

 

Next up we will work on actual collisions.



8. July 2014

 

In this part of the LibGDX tutorial series we are going to take a look at using GLSL shaders.  GLSL stands for OpenGL Shading Language, and since the move from a fixed to a programmable graphics pipeline, shader programming has become incredibly important.  In fact, every single thing rendered with OpenGL has at least a pair of shaders attached to it.  It’s been pretty transparent to you to this point because LibGDX mostly takes care of everything for you.  When you create a SpriteBatch object in LibGDX, it automatically creates a default vertex and fragment shader for you.  If you want more information on working with GLSL I put together the OpenGL Shader Programming Resource Round-up back in May.  It has all the information you should need to get up to speed with GLSL.  For more information on OpenGL in general, I also created this guide.

 

Render Pipeline Overview

 

To better understand the role of GL shaders, it’s good to have a basic understanding of how the modern graphics pipeline works.  This is the high level description I gave in my PlayStation Mobile book; it’s not plagiarism because I’m the author. :)

 

A top-level view of how rendering occurs might help you understand the shader process. It all starts with the shader program, vertex buffers, texture coordinates, and so on being passed in to the graphics device. Then this information is sent off to a vertex shader, which can then transform that vertex, do lighting calculations and more (we will see this process shortly). The vertex shader is executed once for every vertex and a number of different values can be output from this process (these are the out attributes we saw in the shader earlier). Next the results are transformed, culled, and clipped to the screen, discarding anything that is not visible, then rasterized, which is the process of converting from vector graphics to pixel graphics, something that can be drawn to the screen.

The results of this process are fragments, which you can think of as "prospective pixels," and the fragments are passed in to the fragment shader. This is why they are called fragment shaders instead of pixel shaders, although people commonly refer to them using either expression. Once again, the fragment shader is executed once for each fragment. A fragment shader, unlike a vertex shader, can only return a single attribute, which is the RGBA color of the individual pixel. In the end, this is the value that will be displayed on the screen. It sounds like a horribly complex process, but GPUs have dedicated hardware for performing exactly such operations, millions upon millions of times per second. That description also glossed over about a million tiny details, but that is the gist of how the process occurs.

 

So basically shaders are little programs that run over and over again on the data in your scene.  A vertex shader works on the vertices in your scene ( predictably enough… ) and is responsible for positioning each vertex in the world.  Generally this is a matter of transforming them using some kind of matrix passed in from your program.  The output of the vertex shader is ultimately passed to a fragment shader.  Fragments are basically, as I said above, prospective pixels.  These are the actual coloured dots that are going to be drawn on the user’s screen.  In the fragment shader you determine how each pixel will appear.  So basically a vertex shader is a little C-like program that is run for each vertex in your scene, while a fragment shader is run for each potential pixel.

 

There is one very important point to pause on here…  Fragment and Vertex shaders aren’t the only shaders in the modern graphics pipeline.  There are also Geometry shaders.  While vertex shaders can modify geometry ( vertices ), Geometry shaders actually create new geometry.  Geometry shaders were added in OpenGL 3.2 and D3D10.  Then in OpenGL4/D3D11 Tessellation shaders were added.  Tessellation is the process of sub-dividing a surface to add more detail, moving this process to silicon makes it viable to create much lower detailed meshes and tessellate them on the fly.  So, why are we only talking about Fragment and Vertex shaders?  Portability.  Right now OpenGL ES and WebGL do not support any other shaders.  So if you want to support mobile or WebGL, you can’t use these other shader types.

 

SpriteBatch and default Shaders

 

As I said earlier, when you use SpriteBatch, it provides a default Vertex and Fragment shader for you.  Let’s take a look at each of them now.  Let’s do it in the order they occur, so let’s take a look at the vertex shader first:

 

attribute vec4 a_position;
attribute vec4 a_color;
attribute vec2 a_texCoord0;

uniform mat4 u_projTrans;

varying vec4 v_color;
varying vec2 v_texCoords;

void main()
{
    v_color = a_color;
    v_color.a = v_color.a * (256.0/255.0);
    v_texCoords = a_texCoord0;
    gl_Position =  u_projTrans * a_position;
}

 

As I said, GLSL is a very C-like language, right down to including a main() function as the program entry point.  There are a few things to be aware of here.  First are attribute and uniform variables.  These are variables that are passed in from your source code.  LibGDX takes care of most of these for you, but if you are going to write your own default shader, LibGDX expects all of them to exist.  So then, what is the difference between a uniform and an attribute variable?  A uniform stays the same for every single vertex.  Attributes on the other hand can vary from vertex to vertex.  Obviously this can have performance implications, so if it makes sense, prefer using a uniform.  A varying value on the other hand can be thought of as the return value; these values will be passed on down the rendering pipeline ( meaning the fragment shader has access to them ).  As you can see from the use of gl_Position, OpenGL also has some built in values.  For vertex shaders there are gl_Position and gl_PointSize.  Think of these as output variables provided by OpenGL itself.  gl_Position is ultimately the position of your vertex in the world.

 

As to what this shader does, it mostly just prepares a number of variables for the fragment shader: the color, the normalized ( 0 to 1 ) alpha value and the texture coordinates.  The texture itself is bound to texture unit 0, either by calling Texture.bind() in your code or by LibGDX doing it for you.  Finally it positions the vertex in 3D space by multiplying the vertex’s position by the transformation matrix you passed in as u_projTrans.

 

Now let’s take a quick look at the default fragment shader:

#ifdef GL_ES
#define LOWP lowp
    precision mediump float;
#else
    #define LOWP
#endif

varying LOWP vec4 v_color;
varying vec2 v_texCoords;

uniform sampler2D u_texture;

void main()
{
    gl_FragColor = v_color * texture2D(u_texture, v_texCoords);
}

 

As you can see, the format is very similar.  The ugly #ifdef allows this code to work on both mobile and higher end desktop machines.  Essentially if you are running OpenGL ES then the value of LOWP is defined as lowp, and precision is set to medium.  In real world terms, this means that GL ES will run at a lower level of precision for internal calculations, both speeding things up and slightly degrading the result. 

The values v_color and v_texCoords were provided by the vertex shader.  A sampler2D on the other hand is a special glsl datatype for accessing the texture bound to the shader.  gl_FragColor is another special built in variable ( like vertex shaders, fragment shaders have some GL provided variables, many more than Vertex shaders in fact ), this one represents the output color of the pixel the fragment shader is evaluating.  texture2D essentially returns a vec4 value representing the pixel at UV coordinate v_texCoords in texture u_texture.  The vec4 represents the RGBA values of the pixel, so for example (1.0,0.0,0.0,0.5) is a 50% transparent red pixel.  The value assigned to gl_FragColor is ultimately the color value of the pixel displayed on your screen.

 

Of course a full discussion on GLSL shaders is wayyy beyond the scope of this document.  Again if you need more information I suggest you start here.  I am also no expert on GLSL, so you are much better off learning the details from someone else! :)  This does however give you a peek behind the curtain at what LibGDX is doing each frame and is going to be important to us in just a moment.

 

Changing the Default Shader

 

There comes a time when you might want to replace the default shader with one of your own.  This process is actually quite simple, let’s take a look.  Let’s say for some reason you want to render your game entirely in black and white.  Here is a simple vertex and fragment shader combo that will do exactly this:

 

Vertex shader:

attribute vec4 a_position;
attribute vec4 a_color;
attribute vec2 a_texCoord0;

uniform mat4 u_projTrans;

varying vec4 v_color;
varying vec2 v_texCoords;

void main() {
    v_color = a_color;
    v_texCoords = a_texCoord0;
    gl_Position = u_projTrans * a_position;
}

Fragment shader:

#ifdef GL_ES
    precision mediump float;
#endif

varying vec4 v_color;
varying vec2 v_texCoords;
uniform sampler2D u_texture;
uniform mat4 u_projTrans;

void main() {
        vec3 color = texture2D(u_texture, v_texCoords).rgb;
        float gray = (color.r + color.g + color.b) / 3.0;
        vec3 grayscale = vec3(gray);

        gl_FragColor = vec4(grayscale, 1.0);
}

I saved the files as vertex.glsl and fragment.glsl respectively, in the project assets directory.  The shaders are extremely straightforward.  The vertex shader is in fact just the default vertex shader from LibGDX.  Once again remember you need to provide certain values for SpriteBatch to work… don’t worry, things will blow up and tell you if they are missing from your shader! :)  The fragment shader simply samples the RGB value of the current texture pixel, takes the average of the three channels ( for example, a pixel of ( 0.8, 0.4, 0.1 ) averages out to roughly 0.43 ) and uses that as the output value for all three channels.

 

Enough with shader code, let’s take a look at the LibGDX code now:

package com.gamefromscratch;

import com.badlogic.gdx.ApplicationAdapter;
import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.graphics.GL20;
import com.badlogic.gdx.graphics.Texture;
import com.badlogic.gdx.graphics.g2d.Sprite;
import com.badlogic.gdx.graphics.g2d.SpriteBatch;
import com.badlogic.gdx.graphics.glutils.ShaderProgram;

public class ShaderTestApp extends ApplicationAdapter {
    SpriteBatch batch;
    Texture img;
    Sprite sprite;
    String vertexShader;
    String fragmentShader;
    ShaderProgram shaderProgram;

    @Override
    public void create () {
        batch = new SpriteBatch();
        img = new Texture("badlogic.jpg");
        sprite = new Sprite(img);
        sprite.setSize(Gdx.graphics.getWidth(), Gdx.graphics.getHeight());

        vertexShader = Gdx.files.internal("vertex.glsl").readString();
        fragmentShader = Gdx.files.internal("fragment.glsl").readString();
        shaderProgram = new ShaderProgram(vertexShader,fragmentShader);
    }

    @Override
    public void render () {
        Gdx.gl.glClearColor(1, 0, 0, 1);
        Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
        batch.begin();
        batch.setShader(shaderProgram);
        batch.draw(sprite,sprite.getX(),sprite.getY(),sprite.getWidth(),sprite.getHeight());
        batch.end();
    }
}

 

And when you run it:

image

 

Tada, your output is grayscale!

As to what we are doing in that code: we load each shader file as a string.  We then create a new ShaderProgram, passing in a vertex and fragment shader.  The ShaderProgram is the class that populates all the various variables that your shaders expect, bridging the divide between the Java world and the GLSL world.  Then in render() we set our ShaderProgram as active by calling setShader().  Truth is, we could have done this just once in the create method instead of once per frame.

 

Multiple Shaders per Frame

 

In the above example, when we set the shader program, it applied to all of the output.  That’s nice if you want to render the entire world in black and white, but what if you just wanted to render a single sprite using your shader?  Well fortunately that is pretty easy, you simply change the shader again.  Consider:

package com.gamefromscratch;

import com.badlogic.gdx.ApplicationAdapter;
import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.graphics.GL20;
import com.badlogic.gdx.graphics.Texture;
import com.badlogic.gdx.graphics.g2d.Sprite;
import com.badlogic.gdx.graphics.g2d.SpriteBatch;
import com.badlogic.gdx.graphics.glutils.ShaderProgram;

public class ShaderTest2 extends ApplicationAdapter {
    SpriteBatch batch;
    Texture img;
    Sprite leftSprite;
    Sprite rightSprite;
    String vertexShader;
    String fragmentShader;
    ShaderProgram shaderProgram;

    @Override
    public void create () {
        batch = new SpriteBatch();
        img = new Texture("badlogic.jpg");
        leftSprite = new Sprite(img);
        rightSprite = new Sprite(img);

        leftSprite.setSize(Gdx.graphics.getWidth()/2, Gdx.graphics.getHeight());
        leftSprite.setPosition(0,0);
        rightSprite.setSize(Gdx.graphics.getWidth()/2, Gdx.graphics.getHeight());
        rightSprite.setPosition(Gdx.graphics.getWidth()/2,0);

        vertexShader = Gdx.files.internal("vertex.glsl").readString();
        fragmentShader = Gdx.files.internal("fragment.glsl").readString();
        shaderProgram = new ShaderProgram(vertexShader,fragmentShader);
    }

    @Override
    public void render () {
        Gdx.gl.glClearColor(1, 0, 0, 1);
        Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);

        batch.setShader(null);
        batch.begin();
        batch.draw(leftSprite, leftSprite.getX(), leftSprite.getY(), leftSprite.getWidth(), leftSprite.getHeight());
        batch.end();

        batch.setShader(shaderProgram);
        batch.begin();
        batch.draw(rightSprite, rightSprite.getX(), rightSprite.getY(), rightSprite.getWidth(), rightSprite.getHeight());
        batch.end();
    }
}

 

And when you run it:

image

 

One sprite is rendered using the default shader, one using the black and white shader.  As you can see, it’s simply a matter of calling setShader() multiple times.  Calling setShader() but passing in null restores the default built-in shader.  However, each time you call setShader() there is a fair amount of setup done behind the scenes, so you want to minimize the number of times you call it.  Or…

 

Setting Shader on a Mesh Object

 

Each Mesh object in LibGDX has its own ShaderProgram.  Behind the scenes SpriteBatch is actually creating one large Mesh out of all the sprites on your screen, which are ultimately just textured quads.  So if you have a game object that needs fine-tuned shader control, you may consider rolling your own Mesh object.  Let’s take a look at such an example:

package com.gamefromscratch;

import com.badlogic.gdx.ApplicationAdapter;
import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.graphics.*;
import com.badlogic.gdx.graphics.g2d.Sprite;
import com.badlogic.gdx.graphics.g2d.SpriteBatch;
import com.badlogic.gdx.graphics.glutils.ShaderProgram;

public class MeshShaderApp extends ApplicationAdapter {
    SpriteBatch batch;
    Texture texture;
    Sprite sprite;
    Mesh mesh;
    ShaderProgram shaderProgram;

    @Override
    public void create () {
        batch = new SpriteBatch();
        texture = new Texture("badlogic.jpg");
        sprite = new Sprite(texture);
        sprite.setSize(Gdx.graphics.getWidth(),Gdx.graphics.getHeight());

        float[] verts = new float[30];
        int i = 0;
        float x,y; // Mesh location in the world
        float width,height; // Mesh width and height

        x = y = 50f;
        width = height = 300f;

        //Top Left Vertex Triangle 1
        verts[i++] = x;   //X
        verts[i++] = y + height; //Y
        verts[i++] = 0;    //Z
        verts[i++] = 0f;   //U
        verts[i++] = 0f;   //V

        //Top Right Vertex Triangle 1
        verts[i++] = x + width;
        verts[i++] = y + height;
        verts[i++] = 0;
        verts[i++] = 1f;
        verts[i++] = 0f;

        //Bottom Left Vertex Triangle 1
        verts[i++] = x;
        verts[i++] = y;
        verts[i++] = 0;
        verts[i++] = 0f;
        verts[i++] = 1f;

        //Top Right Vertex Triangle 2
        verts[i++] = x + width;
        verts[i++] = y + height;
        verts[i++] = 0;
        verts[i++] = 1f;
        verts[i++] = 0f;

        //Bottom Right Vertex Triangle 2
        verts[i++] = x + width;
        verts[i++] = y;
        verts[i++] = 0;
        verts[i++] = 1f;
        verts[i++] = 1f;

        //Bottom Left Vertex Triangle 2
        verts[i++] = x;
        verts[i++] = y;
        verts[i++] = 0;
        verts[i++] = 0f;
        verts[i] = 1f;

        // Create a mesh out of two triangles rendered clockwise without indices
        mesh = new Mesh( true, 6, 0,
                new VertexAttribute( VertexAttributes.Usage.Position, 3, ShaderProgram.POSITION_ATTRIBUTE ),
                new VertexAttribute( VertexAttributes.Usage.TextureCoordinates, 2, ShaderProgram.TEXCOORD_ATTRIBUTE+"0" ) );

        mesh.setVertices(verts);

        shaderProgram = new ShaderProgram(
                Gdx.files.internal("vertex.glsl").readString(),
                Gdx.files.internal("fragment.glsl").readString()
                );
    }

    @Override
    public void render () {

        Gdx.gl20.glViewport(0, 0, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
        Gdx.gl20.glClearColor(0.2f, 0.2f, 0.2f, 1);
        Gdx.gl20.glClear(GL20.GL_COLOR_BUFFER_BIT);
        Gdx.gl20.glEnable(GL20.GL_TEXTURE_2D);
        Gdx.gl20.glEnable(GL20.GL_BLEND);
        Gdx.gl20.glBlendFunc(GL20.GL_SRC_ALPHA, GL20.GL_ONE_MINUS_SRC_ALPHA);

        batch.begin();
        sprite.draw(batch);
        batch.end();

        texture.bind();
        shaderProgram.begin();
        shaderProgram.setUniformMatrix("u_projTrans", batch.getProjectionMatrix());
        shaderProgram.setUniformi("u_texture", 0);
        mesh.render(shaderProgram, GL20.GL_TRIANGLES);
        shaderProgram.end();
    }
}

And when you run it:

 

image

This sample is long but fairly simple.  In create() we create the geometry for a quad by defining 2 triangles.  We then load our ShaderProgram just like we did in the earlier example.  You may notice in creating the Mesh we define two VertexAttribute values and bind them to values within our ShaderProgram.  These are the input values into the shader.  Unlike with SpriteBatch and the default shader, you need to do a bit more of the behind the scenes work when rolling your own Mesh.

 

Then in render() you see we work with the SpriteBatch normally but then draw our Mesh object using Mesh.render, passing in the ShaderProgram.  Texture.bind() is what binds the texture from LibGDX to texture unit 0 in the GLSL shader.  We then pass in our required uniform values using setUniformMatrix and setUniformi ( as in int ).  This is how you set up uniform values from the Java side of the fence.  u_texture is saying which texture unit to use, while u_projTrans is the transformation matrix for positioning items within our world.  In this case we are simply using the projection matrix from the SpriteBatch.

 

Using a Mesh instead of a Sprite has some disadvantages however.  When working with Sprites, all geometry is batched into a single object, and this is good for performance.  More importantly, with a Mesh you need to roll your own versions of the functionality Sprite gives you, as you need it.  For example, if you want to support scaling or rotation, you need to provide that functionality yourself.


