25. July 2014

Now we are going to look at using physics in our SpriteKit based game.  If you’ve worked with a physics engine before, a lot of this is going to feel very familiar.  One of the key differences from many other engines is that SpriteKit handles updating the graphics as well as the physics.  It’s a fairly involved process, so I am going to split this across multiple posts.

The first one is going to be short and sweet.  We are going to configure the physics of a circle, an SKShapeNode.  It is simply going to exist and let gravity take its course.  Code time:

import SpriteKit

class GameScene: SKScene {

    override func didMoveToView(view: SKView) {
        view.scene!.anchorPoint = CGPoint(x: 0.5, y: 0.5)

        // Build a circle shape out of a CGPath
        var shapeNode = SKShapeNode()
        var path = CGPathCreateMutable()
        CGPathAddArc(path, nil, 0, view.bounds.height/2, 45, 0, M_PI*2, true)
        shapeNode.path = path
        shapeNode.lineWidth = 2.0

        // Give the shape a circular physics body and opt it in to the simulation
        shapeNode.physicsBody = SKPhysicsBody(circleOfRadius: 45)
        shapeNode.physicsBody.dynamic = true

        self.addChild(shapeNode)

        // Gravity accelerates at 1 unit per second squared along the y-axis
        self.physicsWorld.gravity.dy = -1
    }
}

And once run:

Well, that was simple enough, let’s take a quick run through the code.

We start off by creating an SKShapeNode.  This shape node is defined by a CGPath.  You can think of a CGPath as a set of drawing instructions; in this case it consists of a single arc that draws a circle.  Once our path is created we set it as our shapeNode’s path.  We set the lineWidth to 2 to make it a bit more visible.

Next we define the physicsBody.  Every SKNode can have an SKPhysicsBody, which is the node’s representation in the physics engine.  When defining a physics body you pick the shape that most closely matches your underlying shape.  In this case it’s a no brainer to use a circle.  There are constructors available for rectangles, polygons, edges, etc.  Of all the options however, the circle is the least processing intensive.  So if you only need a rough physical representation, prefer the simpler types ( circle, then rectangle ) and leave the more complex types until required.  The key thing here is the dynamic property.  This is what tells SpriteKit to include your SKNode in the physics calculation.  If this isn’t set, nothing is going to happen!

Finally we set the gravity value dy to -1.  This means gravity accelerates objects by -1 unit per second squared along the y axis.  Notice the concept of “unit” here; it is very important.  When dealing with SpriteKit ( and many other physics engines ), a unit is completely arbitrary.  So you may ask “1 what?”.  The answer is, 1 whatever… just be consistent about it.  In your code you could choose 1 to be a millimetre, an inch, a foot, a lightyear.  A lot of it comes down to the type of game you are working on.  One thing to keep in mind here though: massive ranges in value are not a good thing… so don’t mix units.  For example, trying to represent something in millimetres and kilometres in the same simulation is a sure recipe for disaster!
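One common way to stay consistent is to pick a single conversion constant up front and route every value through it.  A minimal language-neutral sketch (in TypeScript; the 100-points-per-meter scale is an arbitrary assumption, not anything SpriteKit mandates):

```typescript
// Assumption for illustration: 1 physics unit = 1 meter, 100 screen points = 1 meter.
const POINTS_PER_METER = 100;

// Convert a screen-space length to physics units...
function toPhysics(points: number): number {
  return points / POINTS_PER_METER;
}

// ...and back again for rendering.
function toScreen(meters: number): number {
  return meters * POINTS_PER_METER;
}

// A 45 point radius ball is 0.45 units in the simulation.  Because every
// value passes through the same constant, units can never get mixed.
const radiusMeters = toPhysics(45);
```

The exact scale does not matter; what matters is that it is defined once and used everywhere.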

Now let’s give our ball something to collide with… that being the edge of the window.  While we are at it, let’s add a bit of bounce to our step:

import SpriteKit

class GameScene: SKScene {

    override func didMoveToView(view: SKView) {
        // Create the ball
        var shapeNode = SKShapeNode()
        var path = CGPathCreateMutable()
        CGPathAddArc(path, nil, 0, 0, 45, 0, M_PI*2, true)
        CGPathCloseSubpath(path)
        shapeNode.path = path
        shapeNode.lineWidth = 2.0
        shapeNode.position = CGPoint(x: view.frame.width/2, y: view.frame.height)

        // Set the ball's physical properties
        shapeNode.physicsBody = SKPhysicsBody(circleOfRadius: 45)
        shapeNode.physicsBody.dynamic = true
        shapeNode.physicsBody.mass = 1
        shapeNode.physicsBody.friction = 0.2
        shapeNode.physicsBody.restitution = 1

        self.addChild(shapeNode)

        // Now make the edges of the screen a physics object as well
        self.physicsBody = SKPhysicsBody(edgeLoopFromRect: view.frame)

        // Gravity accelerates at 1 unit per second squared along the y-axis
        self.physicsWorld.gravity.dy = -1
    }
}

And run this:

Note, the jerkiness you see isn’t from the physics, but instead the animated gif.  Getting that to encode to a reasonable size was oddly a bit of a battle.

The key difference here is, first of all, that we set the edges of the screen as a physics object for our ball to collide against.  A key thing to remember with SpriteKit: everything is an SKNode, even the scene itself!

Next we set a couple of key physics properties for our ball: mass, friction and restitution.  Mass is effectively the weight of the object… going back to the age old question, what falls faster, a ton of feathers or a ton of bricks?  Friction is how two surfaces react to each other; it is generally used when simulating “sliding” and is fairly irrelevant in this example.  Restitution is the key value.  For lack of a better explanation, restitution is the bounciness of the object.  Lower value, less bounce; higher value, more bounce.  A value higher than 1 will result in an object actually gaining momentum after a collision ( aka, going higher UP than it fell from before bouncing ).
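Restitution has a simple energy interpretation: a ball dropped from height h rebounds to roughly e² · h, where e is the restitution.  A quick sketch of that relationship (TypeScript, ignoring drag and the other properties):

```typescript
// With restitution e, the impact speed v = sqrt(2*g*h) becomes e*v on the
// way back up, so the rebound height is (e*v)^2 / (2*g) = e*e*h.
// Note that g cancels out entirely.
function reboundHeight(h: number, restitution: number): number {
  return restitution * restitution * h;
}

// restitution 1.0 -> perfect bounce, same height every time
// restitution 0.5 -> each bounce reaches a quarter of the previous height
// restitution 1.1 -> the ball climbs HIGHER each bounce (gains energy)
const heights = [1.0, 0.5, 1.1].map(e => reboundHeight(100, e));
```

This is why a restitution of exactly 1 in the sample bounces forever, and anything above 1 runs away.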

Next up we will work on actual collisions.

8. July 2014

In this part of the LibGDX tutorial series we are going to take a look at using GLSL shaders.  GLSL stands for OpenGL Shading Language, and since the move from a fixed to a programmable graphics pipeline, shader programming has become incredibly important.  In fact, every single thing rendered with OpenGL has at least a pair of shaders attached to it.  It’s been pretty transparent to you up to this point because LibGDX mostly takes care of everything for you.  When you create a SpriteBatch object in LibGDX, it automatically creates a default vertex and fragment shader for you.  If you want more information on working with GLSL, I put together the OpenGL Shader Programming Resource Round-up back in May.  It has all the information you should need to get up to speed with GLSL.  For more information on OpenGL in general, I also created this guide.

### Render Pipeline Overview

To better understand the role of GL shaders, it’s good to have a basic understanding of how the modern graphics pipeline works.  This is the high level description I gave in my PlayStation Mobile book; it’s not plagiarism because I’m the author. :)

A top-level view of how rendering occurs might help you understand the shader process. It all starts with the shader program, vertex buffers, texture coordinates, and so on being passed in to the graphics device. Then this information is sent off to a vertex shader, which can then transform that vertex, do lighting calculations and more (we will see this process shortly). The vertex shader is executed once for every vertex and a number of different values can be output from this process (these are the out attributes we saw in the shader earlier). Next the results are transformed, culled, and clipped to the screen, discarding anything that is not visible, then rasterized, which is the process of converting from vector graphics to pixel graphics, something that can be drawn to the screen.

The results of this process are fragments, which you can think of as "prospective pixels," and the fragments are passed in to the fragment shader. This is why they are called fragment shaders instead of pixel shaders, although people commonly refer to them using either expression. Once again, the fragment shader is executed once for each fragment. A fragment shader, unlike a vertex shader, can only return a single attribute, which is the RGBA color of the individual pixel. In the end, this is the value that will be displayed on the screen. It sounds like a horribly complex process, but GPUs have dedicated hardware for performing exactly such operations, millions upon millions of times per second. That description also glossed over about a million tiny details, but that is the gist of how the process occurs.

As I said earlier, when you use SpriteBatch, it provides a default Vertex and Fragment shader for you.  Let’s take a look at each of them now.  Let’s do it in the order they occur, so let’s take a look at the vertex shader first:

```
attribute vec4 a_position;
attribute vec4 a_color;
attribute vec2 a_texCoord0;

uniform mat4 u_projTrans;

varying vec4 v_color;
varying vec2 v_texCoords;

void main()
{
    v_color = a_color;
    v_color.a = v_color.a * (256.0/255.0);
    v_texCoords = a_texCoord0;
    gl_Position = u_projTrans * a_position;
}
```

As I said, GLSL is a very C-like language, right down to including a main() function as the program entry point.  There are a few things to be aware of here.  First are attribute and uniform variables.  These are variables that are passed in from your source code.  LibGDX takes care of most of these for you, but if you are going to write your own default shader, LibGDX expects all of them to exist.  So then, what is the difference between a uniform and an attribute variable?  A uniform stays the same for every single vertex, while attributes can vary from vertex to vertex.  Obviously this can have performance implications, so if it makes sense, prefer a uniform.  A varying value, on the other hand, can be thought of as a return value; these values are passed on down the rendering pipeline ( meaning the fragment shader has access to them ).  As you can see from the use of gl_Position, OpenGL also has some built-in variables.  For vertex shaders there are gl_Position and gl_PointSize; think of these as output variables provided by OpenGL itself.  gl_Position is ultimately the position of your vertex in clip space, after the projection transform has been applied.
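The uniform / attribute / varying distinction is easy to see if you model the vertex stage in plain code: one uniform shared by every invocation, one attribute entry per vertex, and the stage's outputs becoming the "varyings".  A rough data-flow sketch (TypeScript, not real GLSL semantics; all names are illustrative):

```typescript
type Vec2 = { x: number; y: number };

// The "varying" outputs that travel on down the pipeline.
interface VertexOut { position: Vec2; vColor: string }

// The "uniform": identical for every vertex in this draw call.
const uScale = 2;

// The "attributes": one entry per vertex.
const aPosition: Vec2[] = [{ x: 1, y: 0 }, { x: 0, y: 1 }, { x: -1, y: 0 }];
const aColor = ["red", "green", "blue"];

// The vertex stage runs once per vertex; it reads that vertex's attributes
// plus the shared uniform, and writes its varyings.
const outputs: VertexOut[] = aPosition.map((p, i) => ({
  position: { x: p.x * uScale, y: p.y * uScale },
  vColor: aColor[i],
}));
```

Three vertices in, three sets of varyings out; the uniform never changed between invocations.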

As to what this shader does, it mostly just prepares a number of variables for the fragment shader: the color, with its alpha value adjusted slightly, and the texture coordinates.  The texture itself is bound separately, to texture unit 0, by calling Texture.bind() in your code ( or LibGDX calls it for you ).  Finally it positions the vertex by multiplying the vertex’s position by the transformation you passed in as u_projTrans.

Now let’s take a quick look at the default fragment shader:

```
#ifdef GL_ES
#define LOWP lowp
precision mediump float;
#else
#define LOWP
#endif

varying LOWP vec4 v_color;
varying vec2 v_texCoords;

uniform sampler2D u_texture;

void main()
{
    gl_FragColor = v_color * texture2D(u_texture, v_texCoords);
}
```

As you can see, the format is very similar.  The ugly #ifdef allows this code to work on both mobile and higher end desktop machines.  Essentially, if you are running OpenGL ES then the value of LOWP is defined as lowp, and the default float precision is set to mediump.  In real world terms, this means that GL ES will run at a lower level of precision for internal calculations, both speeding things up and slightly degrading the results.

The values v_color and v_texCoords were provided by the vertex shader.  A sampler2D, on the other hand, is a special GLSL datatype for accessing the texture bound to the shader.  gl_FragColor is another special built-in variable ( like vertex shaders, fragment shaders have some GL-provided variables, many more than vertex shaders in fact ); this one represents the output color of the pixel the fragment shader is evaluating.  texture2D essentially returns a vec4 value representing the pixel at UV coordinate v_texCoords in texture u_texture.  The vec4 represents the RGBA values of the pixel, so for example (1.0,0.0,0.0,0.5) is a 50% transparent red pixel.  The value assigned to gl_FragColor is ultimately the color value of the pixel displayed on your screen.
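The `v_color * texture2D(...)` line is a component-wise multiply, which is what makes SpriteBatch tinting work.  Sketched outside of GLSL (TypeScript, with a vec4 modeled as a plain tuple):

```typescript
// RGBA color, each channel in 0..1, mirroring a GLSL vec4.
type Vec4 = [number, number, number, number];

// gl_FragColor = v_color * texture2D(...) multiplies channel by channel.
function modulate(tint: Vec4, texel: Vec4): Vec4 {
  return [tint[0] * texel[0], tint[1] * texel[1],
          tint[2] * texel[2], tint[3] * texel[3]];
}

// A white, fully opaque tint leaves the texel untouched...
const unchanged = modulate([1, 1, 1, 1], [1, 0, 0, 0.5]);
// ...while a pure red tint strips the green and blue channels.
const reddened = modulate([1, 0, 0, 1], [0.5, 0.5, 0.5, 1]);
```

Because each channel is in 0..1, multiplying can only darken or preserve, never brighten.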

Of course a full discussion on GLSL shaders is wayyy beyond the scope of this document.  Again if you need more information I suggest you start here.  I am also no expert on GLSL, so you are much better off learning the details from someone else! :)  This does however give you a peek behind the curtain at what LibGDX is doing each frame and is going to be important to us in just a moment.

There comes a time when you might want to alter the default shader and replace it with one of your own.  This process is actually quite simple, so let’s take a look.  Let’s say for some reason you wanted to render your game entirely in black and white.  Here is a simple vertex and fragment shader combo that will do exactly that:

```
attribute vec4 a_position;
attribute vec4 a_color;
attribute vec2 a_texCoord0;

uniform mat4 u_projTrans;

varying vec4 v_color;
varying vec2 v_texCoords;

void main() {
    v_color = a_color;
    v_texCoords = a_texCoord0;
    gl_Position = u_projTrans * a_position;
}
```

```
#ifdef GL_ES
precision mediump float;
#endif

varying vec4 v_color;
varying vec2 v_texCoords;
uniform sampler2D u_texture;
uniform mat4 u_projTrans;

void main() {
    vec3 color = texture2D(u_texture, v_texCoords).rgb;
    float gray = (color.r + color.g + color.b) / 3.0;
    vec3 grayscale = vec3(gray);

    gl_FragColor = vec4(grayscale, 1.0);
}
```

I saved each file as vertex.glsl and shader.glsl respectively, to the project assets directory.  The shaders are extremely straightforward.  The vertex shader is in fact just the default vertex shader from LibGDX.  Once again, remember you need to provide certain values for SpriteBatch to work… don’t worry, things will blow up and tell you if they are missing from your shader! :)  The fragment shader simply samples the RGB value of the current texture pixel, takes the average of the three channels and uses that as the output value for all of them.
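The conversion the fragment shader performs is just this arithmetic; here is the same averaging outside of GLSL (TypeScript sketch).  A common refinement, not used here, is to weight the channels by perceived luminance instead of averaging them equally.

```typescript
// Same math as the fragment shader above: average the three channels
// and use the result for all of them.
function toGrayscale(r: number, g: number, b: number): [number, number, number] {
  const gray = (r + g + b) / 3.0;
  return [gray, gray, gray];
}

// A reddish pixel collapses to a single mid-gray intensity.
const grayPixel = toGrayscale(0.9, 0.3, 0.3);
```

Since all three output channels are identical, any color information is discarded; only brightness survives.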

Enough with shader code, let’s take a look at the LibGDX code now:

```
package com.gamefromscratch;

import com.badlogic.gdx.ApplicationAdapter;
import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.graphics.GL20;
import com.badlogic.gdx.graphics.Texture;
import com.badlogic.gdx.graphics.g2d.Sprite;
import com.badlogic.gdx.graphics.g2d.SpriteBatch;
import com.badlogic.gdx.graphics.glutils.ShaderProgram;

public class ShaderTest extends ApplicationAdapter {
    SpriteBatch batch;
    Texture img;
    Sprite sprite;
    ShaderProgram shaderProgram;

    @Override
    public void create () {
        batch = new SpriteBatch();
        img = new Texture("badlogic.jpg"); // the default LibGDX project texture
        sprite = new Sprite(img);
        sprite.setSize(Gdx.graphics.getWidth(), Gdx.graphics.getHeight());

        // Load each shader's source as a string, then compile them into a program
        String vertexShader = Gdx.files.internal("vertex.glsl").readString();
        String fragmentShader = Gdx.files.internal("shader.glsl").readString();
        shaderProgram = new ShaderProgram(vertexShader, fragmentShader);
    }

    @Override
    public void render () {
        Gdx.gl.glClearColor(1, 0, 0, 1);
        Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
        batch.setShader(shaderProgram);
        batch.begin();
        batch.draw(sprite, sprite.getX(), sprite.getY(), sprite.getWidth(), sprite.getHeight());
        batch.end();
    }
}
```

And when you run it:

As to what we are doing in that code, we load each shader file as a string.  We then create a new ShaderProgram, passing in a vertex and fragment shader.  The ShaderProgram is the class that populates all the various variables your shaders expect, bridging the divide between the Java world and the GLSL world.  Then in render() we set our ShaderProgram as active by calling setShader().  Truth is, we could have done this just once in the create method instead of once per frame.

In the above example, when we set the shader program, it applied to all of the output.  That’s nice if you want to render the entire world in black and white, but what if you just wanted to render a single sprite using your shader?  Well fortunately that is pretty easy, you simply change the shader again.  Consider:

```
package com.gamefromscratch;

import com.badlogic.gdx.ApplicationAdapter;
import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.graphics.GL20;
import com.badlogic.gdx.graphics.Texture;
import com.badlogic.gdx.graphics.g2d.Sprite;
import com.badlogic.gdx.graphics.g2d.SpriteBatch;
import com.badlogic.gdx.graphics.glutils.ShaderProgram;

public class ShaderTest extends ApplicationAdapter {
    SpriteBatch batch;
    Texture img;
    Sprite leftSprite;
    Sprite rightSprite;
    ShaderProgram shaderProgram;

    @Override
    public void create () {
        batch = new SpriteBatch();
        img = new Texture("badlogic.jpg");
        leftSprite = new Sprite(img);
        rightSprite = new Sprite(img);

        leftSprite.setSize(Gdx.graphics.getWidth()/2, Gdx.graphics.getHeight());
        leftSprite.setPosition(0, 0);
        rightSprite.setSize(Gdx.graphics.getWidth()/2, Gdx.graphics.getHeight());
        rightSprite.setPosition(Gdx.graphics.getWidth()/2, 0);

        String vertexShader = Gdx.files.internal("vertex.glsl").readString();
        String fragmentShader = Gdx.files.internal("shader.glsl").readString();
        shaderProgram = new ShaderProgram(vertexShader, fragmentShader);
    }

    @Override
    public void render () {
        Gdx.gl.glClearColor(1, 0, 0, 1);
        Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);

        // Draw the left sprite with the default shader...
        batch.setShader(null);
        batch.begin();
        batch.draw(leftSprite, leftSprite.getX(), leftSprite.getY(), leftSprite.getWidth(), leftSprite.getHeight());
        batch.end();

        // ...then switch to the black and white shader for the right sprite
        batch.setShader(shaderProgram);
        batch.begin();
        batch.draw(rightSprite, rightSprite.getX(), rightSprite.getY(), rightSprite.getWidth(), rightSprite.getHeight());
        batch.end();
    }
}
```

And when you run it:

One sprite is rendered using the default shader, the other using the black and white shader.  As you can see, it’s simply a matter of calling setShader() multiple times.  Calling setShader() and passing in null restores the default built-in shader.  However, each time you call setShader() there is a fair amount of setup done behind the scenes, so you want to minimize the number of times you call it.  Or…

### Setting Shader on a Mesh Object

Each Mesh object in LibGDX has its own ShaderProgram.  Behind the scenes, SpriteBatch is actually creating a single large Mesh out of all the sprites on your screen, which are ultimately just textured quads.  So if you have a game object that needs fine-grained shader control, you may consider rolling your own Mesh object.  Let’s take a look at such an example:

```
package com.gamefromscratch;

import com.badlogic.gdx.ApplicationAdapter;
import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.graphics.GL20;
import com.badlogic.gdx.graphics.Mesh;
import com.badlogic.gdx.graphics.Texture;
import com.badlogic.gdx.graphics.VertexAttribute;
import com.badlogic.gdx.graphics.VertexAttributes;
import com.badlogic.gdx.graphics.g2d.Sprite;
import com.badlogic.gdx.graphics.g2d.SpriteBatch;
import com.badlogic.gdx.graphics.glutils.ShaderProgram;

public class MeshShaderTest extends ApplicationAdapter {
    SpriteBatch batch;
    Texture texture;
    Sprite sprite;
    Mesh mesh;
    ShaderProgram shaderProgram;

    @Override
    public void create () {
        batch = new SpriteBatch();
        texture = new Texture("badlogic.jpg");
        sprite = new Sprite(texture);
        sprite.setSize(Gdx.graphics.getWidth(), Gdx.graphics.getHeight());

        float[] verts = new float[30];
        int i = 0;
        float x,y; // Mesh location in the world
        float width,height; // Mesh width and height

        x = y = 50f;
        width = height = 300f;

        //Top Left Vertex Triangle 1
        verts[i++] = x;          //X
        verts[i++] = y + height; //Y
        verts[i++] = 0;          //Z
        verts[i++] = 0f;         //U
        verts[i++] = 0f;         //V

        //Top Right Vertex Triangle 1
        verts[i++] = x + width;
        verts[i++] = y + height;
        verts[i++] = 0;
        verts[i++] = 1f;
        verts[i++] = 0f;

        //Bottom Left Vertex Triangle 1
        verts[i++] = x;
        verts[i++] = y;
        verts[i++] = 0;
        verts[i++] = 0f;
        verts[i++] = 1f;

        //Top Right Vertex Triangle 2
        verts[i++] = x + width;
        verts[i++] = y + height;
        verts[i++] = 0;
        verts[i++] = 1f;
        verts[i++] = 0f;

        //Bottom Right Vertex Triangle 2
        verts[i++] = x + width;
        verts[i++] = y;
        verts[i++] = 0;
        verts[i++] = 1f;
        verts[i++] = 1f;

        //Bottom Left Vertex Triangle 2
        verts[i++] = x;
        verts[i++] = y;
        verts[i++] = 0;
        verts[i++] = 0f;
        verts[i] = 1f;

        // Create a mesh out of two triangles rendered clockwise without indices
        mesh = new Mesh( true, 6, 0,
            new VertexAttribute( VertexAttributes.Usage.Position, 3, ShaderProgram.POSITION_ATTRIBUTE ),
            new VertexAttribute( VertexAttributes.Usage.TextureCoordinates, 2, ShaderProgram.TEXCOORD_ATTRIBUTE+"0" ) );

        mesh.setVertices(verts);

        String vertexShader = Gdx.files.internal("vertex.glsl").readString();
        String fragmentShader = Gdx.files.internal("shader.glsl").readString();
        shaderProgram = new ShaderProgram(vertexShader, fragmentShader);
    }

    @Override
    public void render () {
        Gdx.gl20.glViewport(0, 0, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
        Gdx.gl20.glClearColor(0.2f, 0.2f, 0.2f, 1);
        Gdx.gl20.glClear(GL20.GL_COLOR_BUFFER_BIT);
        Gdx.gl20.glEnable(GL20.GL_BLEND);
        Gdx.gl20.glBlendFunc(GL20.GL_SRC_ALPHA, GL20.GL_ONE_MINUS_SRC_ALPHA);

        batch.begin();
        sprite.draw(batch);
        batch.end();

        // Bind the texture to texture unit 0, then draw the mesh with our shader
        texture.bind();
        shaderProgram.begin();
        shaderProgram.setUniformMatrix("u_projTrans", batch.getProjectionMatrix());
        shaderProgram.setUniformi("u_texture", 0);
        mesh.render(shaderProgram, GL20.GL_TRIANGLES);
        shaderProgram.end();
    }
}
```

And when you run it:

This sample is long but fairly simple.  In create() we create the geometry for a quad by defining 2 triangles.  We then load our ShaderProgram just like we did in the earlier example.  You may notice in creating the Mesh we define two VertexAttribute values and bind them to values within our ShaderProgram.  These are the input values into the shader.  Unlike with SpriteBatch and the default shader, you need to do a bit more of the behind the scenes work when rolling your own Mesh.
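The 30-float array follows a fixed interleaved layout: five floats per vertex (x, y, z, u, v), six vertices for the two triangles.  A hypothetical helper that generates the same layout programmatically (TypeScript sketch; the Java sample writes the floats out by hand):

```typescript
// Build the same 6-vertex, two-triangle quad as the sample:
// each vertex is 5 floats (x, y, z, u, v), so the array is 30 long.
function quadVerts(x: number, y: number, w: number, h: number): number[] {
  // (corner x, corner y, u, v) per vertex, same order and winding as the sample.
  const corners: [number, number, number, number][] = [
    [x,     y + h, 0, 0], // top left     (triangle 1)
    [x + w, y + h, 1, 0], // top right    (triangle 1)
    [x,     y,     0, 1], // bottom left  (triangle 1)
    [x + w, y + h, 1, 0], // top right    (triangle 2)
    [x + w, y,     1, 1], // bottom right (triangle 2)
    [x,     y,     0, 1], // bottom left  (triangle 2)
  ];
  return corners.flatMap(([vx, vy, u, v]) => [vx, vy, 0, u, v]);
}

// Same parameters as the sample: position (50,50), size 300x300.
const verts = quadVerts(50, 50, 300, 300);
```

Note how the shared corners (top right, bottom left) are simply repeated; with an indexed mesh you would store 4 vertices and 6 indices instead.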

Then in render() you see we work with the SpriteBatch normally but then draw our Mesh object using Mesh.render, passing in the ShaderProgram.  Texture.bind() is what binds the texture from LibGDX to texture unit 0 in the GLSL shader.  We then pass in our required uniform values using setUniformMatrix and setUniformi ( as in int ).  This is how you set up uniform values from the Java side of the fence.  u_texture is saying which texture unit to use, while u_projTrans is the transformation matrix for positioning items within our world.  In this case we are simply using the projection matrix from the SpriteBatch.

Using a Mesh instead of a Sprite has some disadvantages however.  When working with Sprites, all geometry is batched into a single object, which is good for performance.  More importantly, with a Mesh you need to recreate whatever functionality you need from Sprite yourself.  For example, if you want to support scaling or rotation, you need to provide that functionality.

24. June 2014

In the previous part we looked at handling graphics in Phaser, now we are going to look at handling input.  This part is going to be code heavy and fairly light on description.  Look to the code comments for more details.

As is pretty common with game frameworks, there are a number of different ways to handle input and a number of different devices, so let’s get started!

### Using the cursor keys and polling for input

```/// <reference path="phaser.d.ts"/>

// Demonstrate the use of arrow keys in a Phaser app
// This application demonstrates creation of a Cursor and polling for input
class SimpleGame {
    game: Phaser.Game;
    jetSprite: Phaser.Sprite;
    cursors: Phaser.CursorKeys;

    constructor() {
        this.game = new Phaser.Game(640, 480, Phaser.AUTO, 'content', {
            preload: this.preload, create: this.create, update: this.update});
    }

    preload() {
        // Asset path assumed from the earlier parts of this series
        this.game.load.image("jet", "jet.png");
    }

    create() {
        var image = <Phaser.Image>this.game.cache.getImage("jet");
        this.jetSprite = this.game.add.sprite(
            this.game.width / 2 - image.width / 2,
            this.game.height / 2 - image.height / 2,
            "jet");

        // create the cursor key object
        this.cursors = this.game.input.keyboard.createCursorKeys();
    }

    update() {
        // Update input state
        this.game.input.update();

        // Check each of the arrow keys and move accordingly
        // If the Ctrl Key + Left or Right arrow are pressed, move at a greater rate
        if (this.cursors.down.isDown)
            this.jetSprite.position.y++;
        if (this.cursors.up.isDown)
            this.jetSprite.position.y--;
        if (this.cursors.left.isDown) {
            if (this.cursors.left.ctrlKey)
                this.jetSprite.position.x -= 5;
            else
                this.jetSprite.position.x--;
        }
        if (this.cursors.right.isDown) {
            if (this.cursors.right.ctrlKey)
                this.jetSprite.position.x += 5;
            else
                this.jetSprite.position.x++;
        }
    }
}

window.onload = () => {
    var game = new SimpleGame();
};
```

When you run this code the familiar jet sprite is rendered centered to the canvas. You can then use the arrow keys to move the fighter around.  As you can see, the state for each key includes information on modifier keys like Control and Alt.  Polling for input ( that is, checking status each call to update ) is a valid way of controlling a game, but sometimes you instead want to respond to input as it arrives.  Let’s look now at an example of event driven keyboard handling:

```/// <reference path="phaser.d.ts"/>

// Demonstrate keyboard input handling via callback
class SimpleGame {
    game: Phaser.Game;
    jetSprite: Phaser.Sprite;
    W: Phaser.Key;
    A: Phaser.Key;
    S: Phaser.Key;
    D: Phaser.Key;

    constructor() {
        this.game = new Phaser.Game(640, 480, Phaser.AUTO, 'content', {
            preload: this.preload, create: this.create});
    }

    preload() {
        this.game.load.image("jet", "jet.png");
    }

    moveLeft() {
        this.jetSprite.position.x--;
    }
    moveRight() {
        this.jetSprite.position.x++;
    }
    moveUp(e: KeyboardEvent) {
        // As you can see the event handler is passed an optional KeyboardEvent
        // containing the modifier key status.
        // Basically if the control key is held, we move up or down by 5 instead of 1
        if (e.ctrlKey)
            this.jetSprite.position.y -= 5;
        else
            this.jetSprite.position.y--;
    }
    moveDown(e: KeyboardEvent) {
        if (e.ctrlKey)
            this.jetSprite.position.y += 5;
        else
            this.jetSprite.position.y++;
    }

    create() {
        var image = <Phaser.Image>this.game.cache.getImage("jet");
        this.jetSprite = this.game.add.sprite(
            this.game.width / 2 - image.width / 2,
            this.game.height / 2 - image.height / 2,
            "jet");

        // Create a key for each WASD key
        this.W = this.game.input.keyboard.addKey(Phaser.Keyboard.W);
        this.A = this.game.input.keyboard.addKey(Phaser.Keyboard.A);
        this.S = this.game.input.keyboard.addKey(Phaser.Keyboard.S);
        this.D = this.game.input.keyboard.addKey(Phaser.Keyboard.D);

        // Since we are allowing the combination of CTRL+W, which is a shortcut for close window
        // we need to trap all handling of the W key and make sure it doesnt get handled by
        // the browser.
        // Unfortunately you can no longer capture the CTRL+W key combination in Google Chrome
        // except in "Application Mode" because apparently Google thought an unstoppable un prompted
        // key combo of death was a good idea...
        this.game.input.keyboard.addKeyCapture(Phaser.Keyboard.W);

        // Wire up an event handler for each Key.  The handler is a Phaser.Signal attached to the Key object
        this.W.onDown.add(this.moveUp, this);
        this.A.onDown.add(this.moveLeft, this);
        this.S.onDown.add(this.moveDown, this);
        this.D.onDown.add(this.moveRight, this);
    }
}

window.onload = () => {
    var game = new SimpleGame();
};
```

As you can see, you can also create Phaser.Key objects and attach onDown event handlers ( technically Signals ) to each.  Of course you can reuse the same handler for multiple keys.  A couple of key things to notice here… unlike the previous example, holding down a key will not cause continuous movement.  You must press and release the key over and over.  If you want constant movement, either use a polling method, use an action instead of updating each frame, or add some logic to move until the key is released.
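That last option, moving until the key is released, is usually implemented by having the down handler set state, the up handler clear it, and the per-frame update consume it.  A framework-agnostic sketch (TypeScript; the handler names are illustrative, not Phaser API):

```typescript
// Event handlers only record which keys are held; movement happens every
// frame in update(), giving continuous motion from discrete events.
const held = new Set<string>();

function onKeyDown(key: string) { held.add(key); }
function onKeyUp(key: string)   { held.delete(key); }

let x = 0;
function update() {
  if (held.has("left"))  x -= 1;
  if (held.has("right")) x += 1;
}

// Holding "right" for three frames moves three units; releasing stops it.
onKeyDown("right");
update(); update(); update();
onKeyUp("right");
update(); // no movement this frame
```

This keeps the event handlers cheap and puts all movement logic in one place, the update loop.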

The other thing to be aware of here is the use of the CTRL+W combination and addKeyCapture().  addKeyCapture() allows you to prevent the event from bubbling up, so once you’ve handled the key combination, it’s done.  Otherwise it would keep being passed up, either to other objects in the scene, or to the browser itself.  You can also use addKeyCapture to prevent default web behavior, such as scrolling when SPACE is pressed.

16. June 2014

In the previous tutorial part we looked at working with a single sprite.  Reality is, very few games are composed of singular sprites.  UIs are made up of a number of different sprites, and animations are composed of several frames, each a single image.  Loading hundreds of individual textures is not good for performance.  A very common solution is to use a texture atlas ( or sprite sheet ).  Fortunately Xcode makes this extremely easy.

We are going to use the same sprite we did in the previous tutorial, you can download it here.  As you can see, it’s actually a zip file containing a number of png images:

Extract the zip file and rename it jet.atlas.

Now in Xcode, in Project Navigator, right click your project and select Add to Project

Select the directory ( not the files ) and click add.  Defaults should be ok, but make sure it’s set to copy.

And you are done.  The following code:

import SpriteKit

class GameScene: SKScene {

    override func didMoveToView(view: SKView) {
        view.scene!.anchorPoint = CGPoint(x: 0.5, y: 0.5)

        var sprite = SKSpriteNode(imageNamed:"sprite4")
        sprite.xScale = 4
        sprite.yScale = 4
        sprite.position = CGPoint(x:0, y:0)

        self.addChild(sprite)
    }
}

Will load the sprite from the Atlas that had the file name “sprite4.png”.  NOTE HOWEVER, if you add an image named sprite4 to your project, it will be loaded instead of the Texture Atlas.

So, what exactly did Xcode do?  Well, behind the scenes, it went ahead and combined all of your images together into a single OpenGL optimized image, and created a reference file telling SpriteKit how to access it.  Let’s take a look at the results.

First, build your game.  Select Product->Build For->Running

You should now have a Products folder in your project.  Right click the .app file for your project and select Show in Finder:

Now right click the .app file and select Show Package Contents:

Navigate into contents->resources->jet.atlasc and you will see two files, one is an image, the other a plist.  Let’s look at the image first:

That’s our images smashed together into a power-of-2 friendly texture that your GPU can handle efficiently.  The plist file:

This plist tells SpriteKit how to access each individual image within the larger image.  To you however the process is pretty transparent.  Basically, group all common images together in a directory, give the directory a .atlas name and add it to your project, then access each image as you would normally.
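Conceptually the plist is just a table from image name to a rectangle inside the big texture; a lookup is a dictionary fetch plus a conversion to normalized texture coordinates.  A rough sketch of that mapping (TypeScript; the rectangle values are made up for illustration):

```typescript
// An atlas is a name -> sub-rectangle table over one big texture.
interface Rect { x: number; y: number; w: number; h: number }

const atlasSize = { w: 256, h: 256 };         // size of the combined image
const frames: Record<string, Rect> = {        // values are illustrative
  sprite1: { x: 0,  y: 0, w: 64, h: 64 },
  sprite2: { x: 64, y: 0, w: 64, h: 64 },
};

// Convert a named frame into normalized (0..1) UV bounds,
// which is what the GPU ultimately samples with.
function uvBounds(name: string) {
  const r = frames[name];
  return {
    u0: r.x / atlasSize.w, v0: r.y / atlasSize.h,
    u1: (r.x + r.w) / atlasSize.w, v1: (r.y + r.h) / atlasSize.h,
  };
}
```

This is why switching frames within an atlas is cheap: the texture never changes, only the UV rectangle does.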

Sometimes however you may want to access the TextureAtlas itself.  You can do that as well, let’s take a look at how:

import SpriteKit

class GameScene: SKScene {

    let textureAtlas = SKTextureAtlas(named:"jet.atlas")
    var currentTexture:Int = 1

    override func didMoveToView(view: SKView) {
        view.scene!.anchorPoint = CGPoint(x: 0.5, y: 0.5)

        let sprite = SKSpriteNode(texture:textureAtlas.textureNamed("sprite1"))
        sprite.xScale = 8
        sprite.yScale = 8

        self.addChild(sprite)
    }

    override func keyDown(theEvent: NSEvent!) {
        // On any key press move to the next texture
        var sprite = self.children[0] as SKSpriteNode

        switch currentTexture {
        case 1:
            sprite.texture = textureAtlas.textureNamed("sprite2")
        case 2:
            sprite.texture = textureAtlas.textureNamed("sprite3")
        case 3:
            sprite.texture = textureAtlas.textureNamed("sprite1")
        default:
            break
        }

        ++currentTexture
        if(currentTexture > 3) {
            currentTexture = 1
        }
    }
}

Now run it, and each time you press a key it will advance to the next frame.

First let me stop right here and make one thing perfectly clear… THIS IS NOT HOW YOU DO ANIMATION! :)  We will cover animation later. This was just a small code segment to illustrate how to access a TextureAtlas directly.

As you can see, it’s as simple a matter as creating an SKTextureAtlas and passing in the file name.  You can then access each individual SKTexture within the atlas using textureNamed, passing in the file name you used for the original image in the atlas directory.  As you can see, you do not need to pass in the file extension.  Here we also see a Swift switch statement in action.  Switch statements are important to Swift and quite capable.  You can switch on Ints like we have done here, but you can also use strings.  It is tempting to use the sprite name here, but an important thing to realize is that the SKSpriteNode name is NOT the same as the SKTexture name.  Unless you manually name the sprite, its name will be nil.  Unlike C++, case statements in Swift do not fall through, so you do not need to provide a break statement for each case.  However, there are two caveats to be aware of.  First, every single possible value must be handled.  If you don’t want to handle every possible value you can provide a default handler which will catch everything else.  However each case ( even default ) must contain at least one executable statement, thus the break in default.  This only scratches the surface of switch… you can also specify multiple values in a case using commas, provide ranges using the .. and ... operators, etc.

That’s it for TextureAtlases, on to the next part!

13. June 2014

As you can imagine from the name “SpriteKit”, sprites are a pretty core part of creating a game using SpriteKit.  We are going to continue building on the minimal application we created in the previous part.  I want to point out, this isn’t the recommended way of working with SpriteKit, it is instead the simplest way.  In a proper application we would be more data driven and store our data in SKS files instead of simply adding them to the project.  This is something we will cover later on.  First let’s jump right in with code.

We are going to replace the class GameScene we created in the last tutorial.  In SpriteKit, the fundamentals of your game are organized into SKScene objects.  For now we only have one.  Let’s look:

import SpriteKit

class GameScene: SKScene {

    let sprite = SKSpriteNode(imageNamed: "sprite1.png")

    override func didMoveToView(view: SKView) {
        sprite.anchorPoint = CGPoint(x:0.5, y:0.5)
        sprite.xScale = 4
        sprite.yScale = 4

        self.addChild(sprite)
    }

    override func mouseDown(theEvent: NSEvent!) {
        self.sprite.position = CGPoint(x:theEvent.locationInWindow.x, y:theEvent.locationInWindow.y)
    }
}

We add the sprite “sprite1.png” to our project directory; simply drag and drop it from Finder.  The sprite(s) I’ll be using are from the zip file available here.  When you run this code, click anywhere and you should see:

Where ever you click the mouse, the sprite will be drawn.

One immediate change you will notice in this code is that sprite was moved out of didMoveToView and made a member variable.  This allows us to access sprite in different functions ( although we could also retrieve the sprite from the scene, something we will see later ).  In Swift there are only two main ways of declaring a variable: let and var.  var declares a variable, meaning its value can change.  Using let, on the other hand, you are declaring that the value cannot change; this is the same as a const in other languages.  As we touched on briefly in the last part, a value declared with the ? postfix operator is an optional that can be assigned later; it will have the value of nil at initialization, unless one is specifically given like in the code we just wrote.  One thing you may notice is, unlike C++, C# and Java, Swift currently has no access modifiers.  In other words all variables are publicly available ( there are no private, internal, protected, etc. modifiers available ).  Apparently this is only temporary and will be changed in the language later.  This personally seems like a very odd thing not to have in a language from day one.

Since we set the sprite’s anchor to the middle (0.5,0.5), the sprite will be centred to your mouse cursor.  As you can see we added a mouseDown event handler.  This method is available because SKScene ultimately inherits from NSResponder ( UIResponder on iOS ); this is how you handle I/O events in your scene.  The only other new aspect to this code is:

sprite.xScale = 4

sprite.yScale = 4

This code causes the sprite to be scaled by a factor of 4x.  We do this simply because our source sprite was only 64x64 pixels, making it really really tiny in an empty scene!  As you can see, scaling sprites in SpriteKit is extremely easy.
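Incidentally, when you want to scale both axes by the same amount, SKNode also provides setScale(), which sets xScale and yScale in a single call:

```swift
// Equivalent to setting xScale and yScale to 4 individually
sprite.setScale(4)
```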

The structure of a SpriteKit game is actually quite simple.  Your SKScene contains a graph of SKNodes, of which SKSpriteNode is one.  There are others too, including SKVideoNode, SKLabelNode, SKShapeNode, SKEmitterNode and SKEffectNode.  Even SKScene itself is an SKNode, which is how all the magic happens.  Let’s take a quick look at an SKLabelNode in action.

import SpriteKit

class GameScene: SKScene {

    override func didMoveToView(view: SKView) {
        view.scene!.anchorPoint = CGPoint(x: 0.5, y: 0.5)

        var label = SKLabelNode()
        label.text = "Hello World"
        label.fontSize = 128
        label.position = CGPoint(x: 0, y: 0)

        // A node must be added to the scene graph or it will not be drawn
        self.addChild(label)
    }
}

Which predictably enough gives you:

These nodes however can be parented to make hierarchies of nodes.  Take for example a combination of the two we’ve seen, our sprite node with a text label parented to it.

import SpriteKit

class GameScene: SKScene {

    override func didMoveToView(view: SKView) {
        view.scene!.anchorPoint = CGPoint(x: 0.5, y: 0.5)

        var sprite = SKSpriteNode(imageNamed: "sprite1.png")
        sprite.position = CGPoint(x: 100, y: 0)
        sprite.xScale = 4.0
        sprite.yScale = 4.0

        var label = SKLabelNode()
        label.text = "Jet Sprite"
        label.fontSize = 12
        label.position = CGPoint(x: 0, y: 15)
        label.fontColor = NSColor.redColor()
        label.alpha = 0.5

        // Parent the label to the sprite, then the sprite to the scene
        sprite.addChild(label)
        self.addChild(sprite)
    }
}

And when you run it:

There are a few things to notice here.  Each node gets its default coordinates from its parent.  Since the jet sprite is parented to the scene and the scene’s anchor is set to the middle of the screen, when we position the sprite 100 pixels to the right, that’s 100 pixels to the right of the centre of the screen.  Similarly, the text label is parented to the sprite, so its position is relative to the sprite.  Another thing you might notice is the text is blurry as hell.  That is because the label is inheriting the scaling from its parent, the sprite.  As you can see, you compose your scene by creating a hierarchy of various types of nodes.  If we were to transform the parent sprite, all the transformations would apply to its child nodes.

The following example shows how transforming a parent node affects all child nodes.  Spoilers: it also shows you how to update a Scene… we will cover this in more detail later, so don’t pay too much attention to the man behind the curtain.

import SpriteKit

class GameScene: SKScene {

    var sprite = SKSpriteNode(imageNamed: "sprite1.png")

    override func didMoveToView(view: SKView) {
        view.scene!.anchorPoint = CGPoint(x: 0.5, y: 0.5)

        sprite.position = CGPoint(x: 0, y: 0)
        sprite.xScale = 8.0
        sprite.yScale = 8.0

        var label = SKLabelNode()
        label.text = "Jet Sprite"
        label.fontSize = 12
        label.position = CGPoint(x: 0, y: 15)
        label.fontColor = NSColor.redColor()
        label.alpha = 0.5

        // The label is a child of the sprite, so it inherits its transforms
        sprite.addChild(label)
        self.addChild(sprite)
    }

    override func update(currentTime: NSTimeInterval) {
        if sprite.yScale > 0 {
            sprite.yScale -= 0.1
            sprite.xScale -= 0.1
        } else {
            sprite.xScale = 8.0
            sprite.yScale = 8.0
        }
    }
}

Now if we run this code:

Each time update() is called, the sprite’s scale is reduced until it disappears, at which point it snaps back to 8x scaling.  As you can see, the child label node is automatically scaled as well.

Notice how, up until this point, if we wanted to access our sprite across functions we had to make it a member variable?  As I said earlier, there is another option: you can name your nodes and retrieve them later using that name.  Like so:

import SpriteKit

class GameScene: SKScene {

    override func didMoveToView(view: SKView) {
        view.scene!.anchorPoint = CGPoint(x: 0.5, y: 0.5)

        var sprite = SKSpriteNode(imageNamed: "sprite1.png")
        sprite.name = "MyJetSprite"   // name the node so we can look it up later
        sprite.position = CGPoint(x: 0, y: 0)
        sprite.xScale = 4.0
        sprite.yScale = 4.0
        self.addChild(sprite)
    }

    override func update(currentTime: NSTimeInterval) {
        // childNodeWithName returns an optional, so bind it before use
        if let sprite = self.childNodeWithName("MyJetSprite") {
            if sprite.yScale > 0 {
                sprite.yScale -= 0.1
                sprite.xScale -= 0.1
            } else {
                sprite.xScale = 8.0
                sprite.yScale = 8.0
            }
        }
    }
}

You can perform some fairly advanced searches, such as searching recursively through the tree by prefixing the name with “//”.  You can also search for patterns and receive multiple results.  We will look at this in more detail later.
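To give a rough sketch of what that looks like ( the node names here are hypothetical ), a recursive lookup and a pattern search might be written as:

```swift
// "//" makes the search recursive through the whole node tree
if let jet = self.childNodeWithName("//MyJetSprite") {
    jet.hidden = true
}

// Patterns can match multiple nodes; the block is called once per match
self.enumerateChildNodesWithName("Enemy*") { node, stop in
    node.alpha = 0.5
}
```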

This part is starting to get a bit long so I am going to stop now.  The next part will look at more efficient ways of using Sprites, such as using an Atlas, as well as look at basic animation and whatever else I think to cover!