
8. July 2014

 

In this part of the LibGDX tutorial series we are going to take a look at using GLSL shaders.  GLSL stands for OpenGL Shading Language and, since the move from a fixed-function to a programmable graphics pipeline, shader programming has become incredibly important.  In fact, every single thing rendered with OpenGL has at least a pair of shaders attached to it.  It’s been pretty transparent to you up to this point because LibGDX mostly takes care of everything for you.  When you create a SpriteBatch object in LibGDX, it automatically creates a default vertex and fragment shader for you.  If you want more information on working with GLSL, I put together the OpenGL Shader Programming Resource Round-up back in May.  It has all the information you should need to get up to speed with GLSL.  For more information on OpenGL in general, I also created this guide.

 

Render Pipeline Overview

 

To better understand the role of GLSL shaders, it’s good to have a basic understanding of how the modern graphics pipeline works.  This is the high level description I gave in my PlayStation Mobile book; it’s not plagiarism because I’m the author. :)

 

A top-level view of how rendering occurs might help you understand the shader process. It all starts with the shader program, vertex buffers, texture coordinates, and so on being passed in to the graphics device. Then this information is sent off to a vertex shader, which can then transform that vertex, do lighting calculations and more (we will see this process shortly). The vertex shader is executed once for every vertex and a number of different values can be output from this process (these are the out attributes we saw in the shader earlier). Next the results are transformed, culled, and clipped to the screen, discarding anything that is not visible, then rasterized, which is the process of converting from vector graphics to pixel graphics, something that can be drawn to the screen.

The results of this process are fragments, which you can think of as "prospective pixels," and the fragments are passed to the fragment shader. This is why they are called fragment shaders instead of pixel shaders, although people commonly refer to them using either expression. Once again, the fragment shader is executed once for each fragment. A fragment shader, unlike a vertex shader, can only return a single attribute, which is the RGBA color of the individual pixel. In the end, this is the value that will be displayed on the screen. It sounds like a horribly complex process, but GPUs have dedicated hardware for performing exactly such operations, millions upon millions of times per second. That description also glossed over about a million tiny details, but that is the gist of how the process occurs.

 

So basically shaders are little programs that run over and over again on the data in your scene.  A vertex shader works on the vertices in your scene ( predictably enough… ) and is responsible for positioning each vertex in the world.  Generally this is a matter of transforming them using some kind of matrix passed in from your program.  The output of the vertex shader is ultimately passed on to a fragment shader.  Fragments, as I said above, are basically prospective pixels: the actual coloured dots that are going to be drawn on the user’s screen.  In the fragment shader you determine how each of those pixels will appear.  So a vertex shader is a little C-like program that is run for each vertex in your scene, while a fragment shader is run for each potential pixel.

 

There is one very important point to pause on here…  Fragment and vertex shaders aren’t the only shaders in the modern graphics pipeline.  There are also geometry shaders.  While vertex shaders can modify existing geometry ( vertices ), geometry shaders actually create new geometry.  Geometry shaders were added in OpenGL 3.2 and D3D10.  Then in OpenGL 4/D3D11, tessellation shaders were added.  Tessellation is the process of sub-dividing a surface to add more detail, and moving this work onto the GPU makes it viable to create much lower detail meshes and tessellate them on the fly.  So, why are we only talking about fragment and vertex shaders?  Portability.  Right now OpenGL ES and WebGL do not support the other shader stages, so if you want to support mobile or WebGL, you can’t use them.

 

SpriteBatch and default Shaders

 

As I said earlier, when you use SpriteBatch, it provides a default Vertex and Fragment shader for you.  Let’s take a look at each of them now.  Let’s do it in the order they occur, so let’s take a look at the vertex shader first:

 

attribute vec4 a_position;
attribute vec4 a_color;
attribute vec2 a_texCoord0;

uniform mat4 u_projTrans;

varying vec4 v_color;
varying vec2 v_texCoords;

void main()
{
    v_color = a_color;
    v_color.a = v_color.a * (256.0/255.0);
    v_texCoords = a_texCoord0;
    gl_Position =  u_projTrans * a_position;
}

 

As I said, GLSL is a very C-like language, right down to including a main() function as the program entry point.  There are a few things to be aware of here.  First are the attribute and uniform variables.  These are values passed in from your source code.  LibGDX takes care of most of these for you, but if you are going to write your own default shader, LibGDX expects all of them to exist.  So then, what is the difference between a uniform and an attribute variable?  A uniform stays the same for every single vertex in a draw call, while attributes can vary from vertex to vertex.  Obviously this can have performance implications, so if it makes sense, prefer a uniform.  A varying value on the other hand can be thought of as a return value; these values are passed on down the rendering pipeline ( meaning the fragment shader has access to them ).  As you can see from the use of gl_Position, GLSL also has some built-in variables.  For vertex shaders there are gl_Position and gl_PointSize.  Think of these as output variables provided by OpenGL itself; gl_Position is ultimately the transformed position of your vertex that gets passed down the rest of the pipeline.
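To make the Java side of that concrete, here is a minimal sketch of how the two kinds of values typically reach a shader in LibGDX ( assuming a compiled ShaderProgram named shaderProgram; the uniform name u_brightness is purely hypothetical, while the attribute names are the ones the default SpriteBatch shader expects ):

// Uniforms: set once on the ShaderProgram and shared by every vertex/fragment.
// u_brightness is a made-up example; it only works if your GLSL actually declares it.
shaderProgram.begin();
shaderProgram.setUniformf("u_brightness", 0.75f);
shaderProgram.end();

// Attributes: supplied per vertex.  SpriteBatch fills a_position, a_color and a_texCoord0
// for you from each Sprite; with your own Mesh you declare them explicitly:
Mesh quad = new Mesh(true, 4, 0,
        new VertexAttribute(VertexAttributes.Usage.Position, 3, ShaderProgram.POSITION_ATTRIBUTE),
        new VertexAttribute(VertexAttributes.Usage.TextureCoordinates, 2, ShaderProgram.TEXCOORD_ATTRIBUTE + "0"));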

 

As to what this shader does, it mostly just prepares a few values for the fragment shader: the vertex colour ( with a small correction applied to the alpha channel ) and the texture coordinates to sample at.  The texture itself is bound to texture unit 0, either by you calling Texture.bind() in your code or by LibGDX doing it for you.  Finally, it positions the vertex by multiplying the vertex’s position by the transformation matrix you passed in as u_projTrans.
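If you are wondering where that matrix comes from, here is a minimal sketch ( assuming an OrthographicCamera named camera and a SpriteBatch named batch ) of the usual source in LibGDX code:

// SpriteBatch uploads its projection matrix to the shader's u_projTrans uniform when
// begin() is called.  The typical source is the camera's combined projection * view matrix.
camera.update();
batch.setProjectionMatrix(camera.combined);
batch.begin();
// ... draw calls; each vertex is multiplied by u_projTrans in the vertex shader
batch.end();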

 

Now let’s take a quick look at the default fragment shader:

#ifdef GL_ES
#define LOWP lowp
    precision mediump float;
#else
    #define LOWP
#endif

varying LOWP vec4 v_color;
varying vec2 v_texCoords;

uniform sampler2D u_texture;

void main()
{
    gl_FragColor = v_color * texture2D(u_texture, v_texCoords);
}

 

As you can see, the format is very similar.  The ugly #ifdef allows this code to work on both mobile and higher end desktop machines.  Essentially, if you are running OpenGL ES then LOWP is defined as the lowp precision qualifier and the default float precision is set to mediump.  In real world terms, this means that GL ES will run at a lower level of precision for internal calculations, both speeding things up and slightly degrading the results.

The values v_color and v_texCoords were provided by the vertex shader.  A sampler2D, on the other hand, is a special GLSL datatype for accessing the texture bound to the shader.  gl_FragColor is another special built-in variable ( like vertex shaders, fragment shaders have some GL-provided variables, many more than vertex shaders in fact ); this one represents the output colour of the pixel the fragment shader is evaluating.  texture2D returns a vec4 value representing the pixel at UV coordinate v_texCoords in texture u_texture.  The vec4 represents the RGBA values of the pixel, so for example (1.0, 0.0, 0.0, 0.5) is a 50% transparent red pixel.  The value assigned to gl_FragColor is ultimately the colour of the pixel displayed on your screen.

 

Of course a full discussion of GLSL shaders is way beyond the scope of this document.  Again, if you need more information I suggest you start here.  I am also no expert on GLSL, so you are much better off learning the details from someone else! :)  This does however give you a peek behind the curtain at what LibGDX is doing each frame, and it is going to be important to us in just a moment.

 

Changing the Default Shader

 

There comes a time when you might want to replace the default shader with one of your own.  This process is actually quite simple, so let’s take a look.  Say, for some reason, you wanted to render your game entirely in black and white.  Here is a simple vertex and fragment shader combo that will do exactly that:

 

Vertex shader:

attribute vec4 a_position;
attribute vec4 a_color;
attribute vec2 a_texCoord0;

uniform mat4 u_projTrans;

varying vec4 v_color;
varying vec2 v_texCoords;

void main() {
    v_color = a_color;
    v_texCoords = a_texCoord0;
    gl_Position = u_projTrans * a_position;
}

Fragment shader:

#ifdef GL_ES
    precision mediump float;
#endif

varying vec4 v_color;
varying vec2 v_texCoords;
uniform sampler2D u_texture;
uniform mat4 u_projTrans;

void main() {
        vec3 color = texture2D(u_texture, v_texCoords).rgb;
        float gray = (color.r + color.g + color.b) / 3.0;
        vec3 grayscale = vec3(gray);

        gl_FragColor = vec4(grayscale, 1.0);
}

I saved the files as vertex.glsl and fragment.glsl respectively in the project’s assets directory.  The shaders are extremely straightforward.  The vertex shader is in fact just the default vertex shader from LibGDX.  Once again, remember you need to provide certain values for SpriteBatch to work… don’t worry, things will blow up and tell you if they are missing from your shader! :)  The fragment shader simply samples the RGB value of the current texture pixel, takes the average of the three channels and uses that as the output colour.

 

Enough with shader code, let’s take a look at the LibGDX code now:

package com.gamefromscratch;

import com.badlogic.gdx.ApplicationAdapter;
import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.graphics.GL20;
import com.badlogic.gdx.graphics.Texture;
import com.badlogic.gdx.graphics.g2d.Sprite;
import com.badlogic.gdx.graphics.g2d.SpriteBatch;
import com.badlogic.gdx.graphics.glutils.ShaderProgram;

public class ShaderTestApp extends ApplicationAdapter {
    SpriteBatch batch;
    Texture img;
    Sprite sprite;
    String vertexShader;
    String fragmentShader;
    ShaderProgram shaderProgram;

    @Override
    public void create () {
        batch = new SpriteBatch();
        img = new Texture("badlogic.jpg");
        sprite = new Sprite(img);
        sprite.setSize(Gdx.graphics.getWidth(), Gdx.graphics.getHeight());

        vertexShader = Gdx.files.internal("vertex.glsl").readString();
        fragmentShader = Gdx.files.internal("fragment.glsl").readString();
        shaderProgram = new ShaderProgram(vertexShader,fragmentShader);
    }

    @Override
    public void render () {
        Gdx.gl.glClearColor(1, 0, 0, 1);
        Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
        batch.begin();
        batch.setShader(shaderProgram);
        batch.draw(sprite,sprite.getX(),sprite.getY(),sprite.getWidth(),sprite.getHeight());
        batch.end();
    }
}

 

And when you run it:

[Screenshot: the scene rendered in grayscale]

 

Tada, your output is grayscale!

As to what we are doing in that code: we load each shader file as a string, then create a new ShaderProgram, passing in the vertex and fragment shader source.  ShaderProgram is the class that populates all the various variables your shaders expect, bridging the divide between the Java world and the GLSL world.  Then in render() we set our ShaderProgram as active by calling setShader().  Truth is, we could have done this just once in the create() method instead of once per frame.
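As a minimal sketch of that tweak, and of catching GLSL compile errors ( which the example above silently ignores ), the end of create() could look like this:

shaderProgram = new ShaderProgram(vertexShader, fragmentShader);
if (!shaderProgram.isCompiled()) {
    // Any GLSL compile errors end up in the shader log
    Gdx.app.error("Shader", shaderProgram.getLog());
}
batch.setShader(shaderProgram);   // set once here instead of every frame in render()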

 

Multiple Shaders per Frame

 

In the above example, when we set the shader program, it applied to all of the output.  That’s nice if you want to render the entire world in black and white, but what if you just wanted to render a single sprite using your shader?  Well fortunately that is pretty easy, you simply change the shader again.  Consider:

package com.gamefromscratch;

import com.badlogic.gdx.ApplicationAdapter;
import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.graphics.GL20;
import com.badlogic.gdx.graphics.Texture;
import com.badlogic.gdx.graphics.g2d.Sprite;
import com.badlogic.gdx.graphics.g2d.SpriteBatch;
import com.badlogic.gdx.graphics.glutils.ShaderProgram;

public class ShaderTest2 extends ApplicationAdapter {
    SpriteBatch batch;
    Texture img;
    Sprite leftSprite;
    Sprite rightSprite;
    String vertexShader;
    String fragmentShader;
    ShaderProgram shaderProgram;

    @Override
    public void create () {
        batch = new SpriteBatch();
        img = new Texture("badlogic.jpg");
        leftSprite = new Sprite(img);
        rightSprite = new Sprite(img);

        leftSprite.setSize(Gdx.graphics.getWidth()/2, Gdx.graphics.getHeight());
        leftSprite.setPosition(0,0);
        rightSprite.setSize(Gdx.graphics.getWidth()/2, Gdx.graphics.getHeight());
        rightSprite.setPosition(Gdx.graphics.getWidth()/2,0);

        vertexShader = Gdx.files.internal("vertex.glsl").readString();
        fragmentShader = Gdx.files.internal("fragment.glsl").readString();
        shaderProgram = new ShaderProgram(vertexShader,fragmentShader);
    }

    @Override
    public void render () {
        Gdx.gl.glClearColor(1, 0, 0, 1);
        Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);

        batch.setShader(null);
        batch.begin();
        batch.draw(leftSprite, leftSprite.getX(), leftSprite.getY(), leftSprite.getWidth(), leftSprite.getHeight());
        batch.end();

        batch.setShader(shaderProgram);
        batch.begin();
        batch.draw(rightSprite, rightSprite.getX(), rightSprite.getY(), rightSprite.getWidth(), rightSprite.getHeight());
        batch.end();
    }
}

 

And when you run it:

[Screenshot: the left sprite rendered with the default shader, the right sprite in grayscale]

 

One sprite is rendered using the default shader, the other using the black and white shader.  As you can see, it’s simply a matter of calling setShader() multiple times.  Calling setShader() with null restores the default built-in shader.  However, each time you call setShader() there is a fair amount of setup done behind the scenes, so you want to minimize the number of times you call it.  Or…

 

Setting Shader on a Mesh Object

 

Each Mesh object in LibGDX can be rendered with its own ShaderProgram.  Behind the scenes, SpriteBatch is actually building one big Mesh out of all the sprites on your screen, which are ultimately just textured quads.  So if you have a game object that needs fine-tuned shader control, you may consider rolling your own Mesh object.  Let’s take a look at such an example:

package com.gamefromscratch;

import com.badlogic.gdx.ApplicationAdapter;
import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.graphics.*;
import com.badlogic.gdx.graphics.g2d.Sprite;
import com.badlogic.gdx.graphics.g2d.SpriteBatch;
import com.badlogic.gdx.graphics.glutils.ShaderProgram;

public class MeshShaderApp extends ApplicationAdapter {
    SpriteBatch batch;
    Texture texture;
    Sprite sprite;
    Mesh mesh;
    ShaderProgram shaderProgram;

    @Override
    public void create () {
        batch = new SpriteBatch();
        texture = new Texture("badlogic.jpg");
        sprite = new Sprite(texture);
        sprite.setSize(Gdx.graphics.getWidth(),Gdx.graphics.getHeight());

        float[] verts = new float[30];
        int i = 0;
        float x,y; // Mesh location in the world
        float width,height; // Mesh width and height

        x = y = 50f;
        width = height = 300f;

        //Top Left Vertex Triangle 1
        verts[i++] = x;   //X
        verts[i++] = y + height; //Y
        verts[i++] = 0;    //Z
        verts[i++] = 0f;   //U
        verts[i++] = 0f;   //V

        //Top Right Vertex Triangle 1
        verts[i++] = x + width;
        verts[i++] = y + height;
        verts[i++] = 0;
        verts[i++] = 1f;
        verts[i++] = 0f;

        //Bottom Left Vertex Triangle 1
        verts[i++] = x;
        verts[i++] = y;
        verts[i++] = 0;
        verts[i++] = 0f;
        verts[i++] = 1f;

        //Top Right Vertex Triangle 2
        verts[i++] = x + width;
        verts[i++] = y + height;
        verts[i++] = 0;
        verts[i++] = 1f;
        verts[i++] = 0f;

        //Bottom Right Vertex Triangle 2
        verts[i++] = x + width;
        verts[i++] = y;
        verts[i++] = 0;
        verts[i++] = 1f;
        verts[i++] = 1f;

        //Bottom Left Vertex Triangle 2
        verts[i++] = x;
        verts[i++] = y;
        verts[i++] = 0;
        verts[i++] = 0f;
        verts[i] = 1f;

        // Create a mesh out of two triangles rendered clockwise without indices
        mesh = new Mesh( true, 6, 0,
                new VertexAttribute( VertexAttributes.Usage.Position, 3, ShaderProgram.POSITION_ATTRIBUTE ),
                new VertexAttribute( VertexAttributes.Usage.TextureCoordinates, 2, ShaderProgram.TEXCOORD_ATTRIBUTE+"0" ) );

        mesh.setVertices(verts);

        shaderProgram = new ShaderProgram(
                Gdx.files.internal("vertex.glsl").readString(),
                Gdx.files.internal("fragment.glsl").readString()
                );
    }

    @Override
    public void render () {

        Gdx.gl20.glViewport(0, 0, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
        Gdx.gl20.glClearColor(0.2f, 0.2f, 0.2f, 1);
        Gdx.gl20.glClear(GL20.GL_COLOR_BUFFER_BIT);
        Gdx.gl20.glEnable(GL20.GL_TEXTURE_2D);
        Gdx.gl20.glEnable(GL20.GL_BLEND);
        Gdx.gl20.glBlendFunc(GL20.GL_SRC_ALPHA, GL20.GL_ONE_MINUS_SRC_ALPHA);

        batch.begin();
        sprite.draw(batch);
        batch.end();

        texture.bind();
        shaderProgram.begin();
        shaderProgram.setUniformMatrix("u_projTrans", batch.getProjectionMatrix());
        shaderProgram.setUniformi("u_texture", 0);
        mesh.render(shaderProgram, GL20.GL_TRIANGLES);
        shaderProgram.end();
    }
}

And when you run it:

 

[Screenshot: the full screen sprite drawn by the SpriteBatch, with the grayscale mesh quad drawn over it]

This sample is long but fairly simple.  In create() we create the geometry for a quad by defining two triangles.  We then load our ShaderProgram just like we did in the earlier example.  You may notice that in creating the Mesh we define two VertexAttribute values and bind them to attributes within our ShaderProgram.  These are the per-vertex inputs into the shader.  Unlike with SpriteBatch and the default shader, you need to do a bit more of the behind-the-scenes work when rolling your own Mesh.

 

Then in render() we work with the SpriteBatch normally, but then draw our Mesh object using Mesh.render(), passing in the ShaderProgram.  Texture.bind() is what binds the texture from LibGDX to texture unit 0 in the GLSL shader.  We then pass in our required uniform values using setUniformMatrix() and setUniformi() ( as in int ).  This is how you set uniform values from the Java side of the fence.  u_texture says which texture unit to use, while u_projTrans is the transformation matrix used to position things in our world; in this case we simply reuse the projection matrix from the SpriteBatch.
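The same calls work for any uniform your own shader declares.  Here is a hedged sketch, assuming you added a hypothetical uniform float u_time; to the fragment shader to drive some animated effect, and that elapsedTime is a float field you accumulate each frame:

// u_time is a made-up uniform; this only works if the GLSL actually declares and uses it.
// If a declared uniform is never used, the driver may optimize it away; setting
// ShaderProgram.pedantic = false prevents an exception in that case.
elapsedTime += Gdx.graphics.getDeltaTime();
shaderProgram.begin();
shaderProgram.setUniformMatrix("u_projTrans", batch.getProjectionMatrix());
shaderProgram.setUniformi("u_texture", 0);
shaderProgram.setUniformf("u_time", elapsedTime);
mesh.render(shaderProgram, GL20.GL_TRIANGLES);
shaderProgram.end();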

 

Using a Mesh instead of a Sprite has some disadvantages, however.  When working with Sprites, all geometry is batched into a single object, which is good for performance.  More importantly, with a Mesh you need to recreate whatever functionality you need from Sprite yourself.  For example, if you want to support scaling or rotation, you have to provide that functionality.
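One way to get scaling and rotation without touching the vertex array is to bake a model transform into the matrix you upload as u_projTrans.  This is only a sketch using the names from the listing above ( it replaces the existing setUniformMatrix call inside render(), needs com.badlogic.gdx.math.Matrix4 imported, and the translate/rotate/scale values are arbitrary ):

// Combine the batch's projection matrix with a per-object model transform
Matrix4 model = new Matrix4()
        .translate(200f, 200f, 0f)   // move the quad
        .rotate(0f, 0f, 1f, 45f)     // rotate 45 degrees around the Z axis
        .scale(0.5f, 0.5f, 1f);      // shrink it to half size
Matrix4 combined = batch.getProjectionMatrix().cpy().mul(model);
shaderProgram.setUniformMatrix("u_projTrans", combined);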


26. June 2014

 

Since LibGDX 1.1, there has been a massively improved way of creating a LibGDX project.  This document looks at how to get up and running using the setup application.

 

The first thing you have to do is download it.  Head on over to the LibGDX download page and download the Setup App.

 

[Screenshot: the LibGDX download page]

 

Now that you’ve downloaded gdx-setup.jar, double click it and say OK to any security questions.  If it doesn’t load when you double click it, you either don’t have Java installed ( you need a JDK to work with LibGDX ) or your file association for .jar files is broken; in that case you can also launch it from a command prompt with java -jar gdx-setup.jar.  Assuming everything worked correctly, the setup app should have loaded.

 

Here is an example I’ve configured already:

[Screenshot: the setup app with an example configuration filled in]

 

One important thing is that you need to have the Android SDK installed as well.  Google has made the process rather confusing, as their default download is bundled with Eclipse.  If you are intending to use a different IDE, the download you want is under “Download for other Platforms”, which is an extremely stupid name…  Anyways, you need to have the Android SDK installed ( Android Build Tools version 19.1 as of the time of writing; keep in mind you have to configure the Android SDK after downloading it ), as you have to provide the path to where you installed it.

 

Otherwise it’s the usual suspects here: your project name, package and class.  Keep in mind these follow Java naming conventions, so, for example, no spaces and mostly just alphanumeric characters.  LibGDX version lets you choose which version of LibGDX to use, although oddly enough each release has only ever offered the most recent version.

 

In the Sub Projects section, you are deciding what targets you want to support.  If you aren’t on a Mac, iOS will not be an option.  

 

Extensions are where you configure which of the LibGDX extensions you want installed/configured.  Bullet is a 3D physics engine, FreeType is a font engine, Tools gives you the various command line tools that are bundled with LibGDX ( I believe ), Controllers is gamepad support, Box2D is a 2D physics engine, while Box2DLights is… well, I have no idea what that is.

 

Now the part that is of critical importance is “Advanced”.  In all honesty, I think the “Advanced” options are far more important than the extensions.  I would actually suggest making the setup app a two-step process and making Advanced a mandatory step.  Anyways, enough about my opinions, click Advanced!

[Screenshot: the Advanced settings dialog]

 

This is where you pick which IDE you use.  For each one you intend to use, tick the checkbox and it will generate the appropriate project files.  In this case I am going to be using IntelliJ IDEA, so I tick the box next to IDEA.  The Maven mirror box is probably something you will never use, so let’s ignore it.  Offline mode is a bit more difficult to decide.  Basically, the project you create will automatically go online, figure out what dependencies your project has and make sure your project is always up to date.  If you aren’t going to have a reliable internet connection, or yours is ungodly slow, you may consider ticking Offline Mode.  Otherwise you are probably best served leaving it unticked.  Now click Save.

 

Now back in the Setup app, click Generate:

[Screenshot: the setup app’s Generate button]

 

Now the setup app will churn away for a while, downloading required files and configuring your project.  If there is something wrong with your configuration, you will be warned.  On my PC, over my cell connection, the process took about a minute.  If successful it should look like this:

[Screenshot: the setup app after a successful generation]

 

There is still some configuration to be done to run each project type in your IDE.  I have already covered that process for IntelliJ right here; you only need to read the last half of that link.  ( Alternatively, the generated project includes a Gradle wrapper, so you can test the desktop build from the command line with gradlew desktop:run. )

 


22. June 2014

 

The title kinda says it all: LibGDX 1.2 was released today.  The update includes:

 

 

I don’t think the breaking changes will impact any of our tutorial series.  It’s interesting to see the new AI extension; it will be nice to see how that develops ( A* next? ).  The improved IntelliJ build times are certainly welcome.  You can get started here.


4. May 2014

 

You know one of the things that suck about working in Java?

 

Java.

 

Or more precisely, the Java runtime.  The need to have the Java runtime pre-installed in order to run your game is a bit of a pain.  Fortunately Badlogic Games, the people behind LibGDX, have released Packr.

 

In their own words:

Packages your JAR, assets and a JVM for distribution on Windows (ZIP), Linux (ZIP) and Mac OS X (.app), adding a native executable file to make it appear like the app is a native app. Packr is most suitable for GUI applications.

 

To be honest, that explanation kinda sucks; their description on Reddit is much better.

This has been a pain point for a few of our users, though Minecraft made pretty much anyone on this planet install Java anyways :)

I wrote a little tool called packr which does the following:

  • bundle your jar/assets with an embedded JRE
  • add a native executable for the respective platform to make your app look native
  • minimize the JRE (zipped: 23mb, extracted 40mb)

This should make the user experience a bit better, as there aren't any things the user needs to have installed before trying your game/app. All they need to do is download a ZIP/App Bundle and execute the native executable.

It's very easy to use, either via CLI arguments, a JSON config file, or by invoking it directly from code (it's really a library).

 

There are a couple limitations right now:

    • Icons aren't set yet on any platform, need to do that manually.
    • Windows is 32-bit only, Linux is 64-bit only, Mac OS X is 64-bit only
    • JRE minimization is very conservative, depending on your app, you can carve out stuff from a JRE yourself, disable minimization and pass your custom JRE to packr

 

If this sounds useful to you, you can learn more at the Packr GitHub page.


1. May 2014

 

In the prior tutorial we got a simple top-down tile map working.  Now we want to add some character to it, literally.  Depending on your needs this can be laughably simple or somewhat complex.  Let’s take a look at a basic example.  This builds on our previous code example, and if you’ve worked through the prior tutorials in this series, there is nothing new here yet.

 

package com.gamefromscratch;

import com.badlogic.gdx.ApplicationAdapter;
import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.Input;
import com.badlogic.gdx.InputProcessor;
import com.badlogic.gdx.graphics.GL20;
import com.badlogic.gdx.graphics.OrthographicCamera;
import com.badlogic.gdx.graphics.Texture;
import com.badlogic.gdx.graphics.g2d.Sprite;
import com.badlogic.gdx.graphics.g2d.SpriteBatch;
import com.badlogic.gdx.maps.tiled.TiledMap;
import com.badlogic.gdx.maps.tiled.TiledMapRenderer;
import com.badlogic.gdx.maps.tiled.TmxMapLoader;
import com.badlogic.gdx.maps.tiled.renderers.OrthogonalTiledMapRenderer;

public class TiledTest extends ApplicationAdapter implements InputProcessor {
    Texture img;
    TiledMap tiledMap;
    OrthographicCamera camera;
    TiledMapRenderer tiledMapRenderer;
    SpriteBatch sb;
    Texture texture;
    Sprite sprite;
    
    @Override public void create () {
        float w = Gdx.graphics.getWidth();
        float h = Gdx.graphics.getHeight();

        camera = new OrthographicCamera();
        camera.setToOrtho(false,w,h);
        camera.update();
        tiledMap = new TmxMapLoader().load("MyCrappyMap.tmx");
        tiledMapRenderer = new OrthogonalTiledMapRenderer(tiledMap);
        Gdx.input.setInputProcessor(this);

        sb = new SpriteBatch();
        texture = new Texture(Gdx.files.internal("pik.png"));
        sprite = new Sprite(texture);
    }

    @Override public void render () {
        Gdx.gl.glClearColor(1, 0, 0, 1);
        Gdx.gl.glBlendFunc(GL20.GL_SRC_ALPHA, GL20.GL_ONE_MINUS_SRC_ALPHA);
        Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
        camera.update();
        tiledMapRenderer.setView(camera);
        tiledMapRenderer.render();
        sb.begin();
        sprite.draw(sb);
        sb.end();
    }

    @Override public boolean keyDown(int keycode) {
        return false;
    }

    @Override public boolean keyUp(int keycode) {
        if(keycode == Input.Keys.LEFT)
            camera.translate(-32,0);
        if(keycode == Input.Keys.RIGHT)
            camera.translate(32,0);
        if(keycode == Input.Keys.UP)
            camera.translate(0,-32);
        if(keycode == Input.Keys.DOWN)
            camera.translate(0,32);
        if(keycode == Input.Keys.NUM_1)
            tiledMap.getLayers().get(0).setVisible(!tiledMap.getLayers().get(0).isVisible());
        if(keycode == Input.Keys.NUM_2)
            tiledMap.getLayers().get(1).setVisible(!tiledMap.getLayers().get(1).isVisible());
        return false;
    }

    @Override public boolean keyTyped(char character) {

        return false;
    }

    @Override public boolean touchDown(int screenX, int screenY, int pointer, int button) {
        return false;
    }

    @Override public boolean touchUp(int screenX, int screenY, int pointer, int button) {
        return false;
    }

    @Override public boolean touchDragged(int screenX, int screenY, int pointer) {
        return false;
    }

    @Override public boolean mouseMoved(int screenX, int screenY) {
        return false;
    }

    @Override public boolean scrolled(int amount) {
        return false;
    }
}

 

 

So what exactly did we do?  Well, first off, we added this sprite to the project in the android/assets folder.

[Image: pik.png, the player sprite]

It might look somewhat familiar… 

In create() we allocate a SpriteBatch, load our texture and create a sprite with it.  Finally, in render(), after the map is drawn, we simply draw our sprite with the batch.  Order is of critical importance.  One other thing I should probably point out… I don’t dispose() of anything, so this code leaks like mad…  Now if you run the code you see:

[Screenshot: the tile map with the sprite drawn at the origin]

 

So far so good! 
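Before moving on, here is a minimal sketch of the cleanup the example above skips ( an extra override on the same ApplicationAdapter ):

@Override public void dispose () {
    // Free the GPU/native resources the example otherwise leaks
    sb.dispose();
    texture.dispose();
    tiledMap.dispose();
}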

 

The first problem we’ve got here is: how do we position our sprite within the world?  In this case I am simply drawing at the origin, but how would we go about positioning the sprite in the tile a user clicks?  Let’s go ahead and see!

 

Add the following to the touchDown handler ( you will also need to import com.badlogic.gdx.math.Vector3 ):

public boolean touchDown(int screenX, int screenY, int pointer, int button) {
    Vector3 clickCoordinates = new Vector3(screenX,screenY,0);
    Vector3 position = camera.unproject(clickCoordinates);
    sprite.setPosition(position.x, position.y);
    return true;
}

 

Now we need to make a small change to our sprite batch.  We need to apply our camera transformations to it, so that when we scroll the screen around using the arrow keys, our sprite stays put relative to the map.  This is easily accomplished in render():

public void render () {
    Gdx.gl.glClearColor(1, 0, 0, 1);
    Gdx.gl.glBlendFunc(GL20.GL_SRC_ALPHA, GL20.GL_ONE_MINUS_SRC_ALPHA);
    Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
    camera.update();
    tiledMapRenderer.setView(camera);
    tiledMapRenderer.render();
    sb.setProjectionMatrix(camera.combined);
    sb.begin();
    sprite.draw(sb);
    sb.end();
}

 

Now when you run it, the sprite will draw where you click in the world:

[Screenshot: the sprite drawn at the clicked location in the world]

 

There’s a small possible problem here… what if you wanted your sprite to be BEHIND those rocks, or behind items in the higher layers?

 

There are a couple of ways to do this.  First, you could create a layer for player sprites: simply create a layer in Tiled that you keep empty and add your sprite to it.  Here, instead, I am going to do it programmatically.  Let me make something perfectly clear, this is most certainly *NOT* how you would normally do it, but it does give an interesting bit of insight into how tiles and layers are composed.

public void create () {
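    // Note: beyond the imports in the earlier listing, this snippet also uses
    // com.badlogic.gdx.graphics.g2d.TextureRegion, com.badlogic.gdx.maps.MapLayer,
    // com.badlogic.gdx.maps.tiled.TiledMapTileLayer and com.badlogic.gdx.maps.tiled.tiles.StaticTiledMapTile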
    float w = Gdx.graphics.getWidth();
    float h = Gdx.graphics.getHeight();

    camera = new OrthographicCamera();
    camera.setToOrtho(false,w,h);
    camera.update();
    tiledMap = new TmxMapLoader().load("MyCrappyMap.tmx");
    tiledMapRenderer = new OrthogonalTiledMapRenderer(tiledMap);
    Gdx.input.setInputProcessor(this);

    sb = new SpriteBatch();
    texture = new Texture(Gdx.files.internal("pik.png"));
    sprite = new Sprite(texture);

    
    // Get the width and height of our maps
    // Then halve it, as our sprites are 64x64 not 32x32 that our map is made of
    int mapWidth = tiledMap.getProperties().get("width",Integer.class)/2;
    int mapHeight = tiledMap.getProperties().get("height",Integer.class)/2;

    // Create a new map layer
    TiledMapTileLayer tileLayer = new TiledMapTileLayer(mapWidth,mapHeight,64,64);
    
    // Create a cell(tile) to add to the layer
    TiledMapTileLayer.Cell cell = new TiledMapTileLayer.Cell();
    
    // The sprite/tilesheet behind our new layer is a single image (our sprite)
    // Create a TextureRegion that is the entire size of our texture
    TextureRegion textureRegion = new TextureRegion(texture,64,64);
    
    // Now set the graphic for our cell to our newly created region
    cell.setTile(new StaticTiledMapTile(textureRegion));
    
    // Now set the cell at position 4,10 ( 8,20 in map coordinates ).  This is the position of a tree
    // Relative to 0,0 in our map which is the bottom left corner
    tileLayer.setCell(4,10,cell);

    // Ok, I admit, this part is a gross hack. 
    // Get the current top most layer from the map and store it
    MapLayer tempLayer = tiledMap.getLayers().get(tiledMap.getLayers().getCount()-1);
    // Now remove it
    tiledMap.getLayers().remove(tiledMap.getLayers().getCount()-1);
    // Now add our newly created layer
    tiledMap.getLayers().add(tileLayer);
    // Now add it back, now our new layer is not the top most one.
    tiledMap.getLayers().add(tempLayer);
}

 

And look, we are behind a tree!

[Screenshot: the sprite drawn behind a tree]

 

 

The comments explain it all, really.  We are programmatically creating a new tile layer within the map and positioning it below the top-most layer.  You would never do it this way for a sprite, for several reasons.  First, you will only be able to move in cell-by-cell chunks ( aka 64 pixels at a time ).  In certain games that makes sense, like old school Ultima 4 style role playing games.  Next, to move you actually have to remove the tile from its current location and add it to the new one.  Finally, it’s probably pretty slow to boot.  However, if you want to dynamically populate a layer, say with power-ups or things you create programmatically instead of using the level editor, this is how you can do it.
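For example, "moving" the sprite one cell to the right under this scheme would look something like this, using the tileLayer and cell variables from the listing above:

// Clear the old cell, then place the same cell one column over
tileLayer.setCell(4, 10, null);
tileLayer.setCell(5, 10, cell);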

 

What about dealing with the player’s sprite and moving it only a few pixels at a time?  Well, that’s slightly more complicated.

 

One option you have, especially if you want to keep using the Sprite class, is to extend the map rendering class, in this case OrthogonalTiledMapRenderer, override the render method and render your sprite(s) yourself.  Here is a simple example:

package com.gamefromscratch;

import com.badlogic.gdx.graphics.g2d.Sprite;
import com.badlogic.gdx.maps.MapLayer;
import com.badlogic.gdx.maps.MapObject;
import com.badlogic.gdx.maps.tiled.TiledMap;
import com.badlogic.gdx.maps.tiled.TiledMapTileLayer;
import com.badlogic.gdx.maps.tiled.renderers.OrthogonalTiledMapRenderer;

import java.util.ArrayList;
import java.util.List;

public class OrthogonalTiledMapRendererWithSprites extends OrthogonalTiledMapRenderer {
    private List<Sprite> sprites;
    private int drawSpritesAfterLayer = 1;

    public OrthogonalTiledMapRendererWithSprites(TiledMap map) {
        super(map);
        sprites = new ArrayList<Sprite>();
    }

    public void addSprite(Sprite sprite){
        sprites.add(sprite);
    }

    @Override
    public void render() {
        beginRender();
        int currentLayer = 0;
        for (MapLayer layer : map.getLayers()) {
            if (layer.isVisible()) {
                if (layer instanceof TiledMapTileLayer) {
                    renderTileLayer((TiledMapTileLayer)layer);
                    currentLayer++;
                    if(currentLayer == drawSpritesAfterLayer){
                        for(Sprite sprite : sprites)
                            sprite.draw(this.getSpriteBatch());
                    }
                } else {
                    for (MapObject object : layer.getObjects()) {
                        renderObject(object);
                    }
                }
            }
        }
        endRender();
    }
}

 

Now we simply use it in place of the other TiledMapRenderer.  Note that we no longer need our own SpriteBatch, since we are using the map renderer’s.  Here’s the updated code:

package com.gamefromscratch;

import com.badlogic.gdx.ApplicationAdapter;
import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.Input;
import com.badlogic.gdx.InputProcessor;
import com.badlogic.gdx.graphics.GL20;
import com.badlogic.gdx.graphics.OrthographicCamera;
import com.badlogic.gdx.graphics.Texture;
import com.badlogic.gdx.graphics.g2d.Sprite;
import com.badlogic.gdx.graphics.g2d.SpriteBatch;
import com.badlogic.gdx.maps.tiled.TiledMap;
import com.badlogic.gdx.maps.tiled.TmxMapLoader;
import com.badlogic.gdx.math.Vector3;

public class TiledTest extends ApplicationAdapter implements InputProcessor {
    Texture img;
    TiledMap tiledMap;
    OrthographicCamera camera;
    OrthogonalTiledMapRendererWithSprites tiledMapRenderer;
    Texture texture;
    Sprite sprite;
    
    @Override
    public void create () {
        float w = Gdx.graphics.getWidth();
        float h = Gdx.graphics.getHeight();

        camera = new OrthographicCamera();
        camera.setToOrtho(false,w,h);
        camera.update();

        texture = new Texture(Gdx.files.internal("pik.png"));
        sprite = new Sprite(texture);

        tiledMap = new TmxMapLoader().load("MyCrappyMap.tmx");
        tiledMapRenderer = new OrthogonalTiledMapRendererWithSprites(tiledMap);
        tiledMapRenderer.addSprite(sprite);
        Gdx.input.setInputProcessor(this);
    }

    @Override
    public void render () {
        Gdx.gl.glClearColor(1, 0, 0, 1);
        Gdx.gl.glBlendFunc(GL20.GL_SRC_ALPHA, GL20.GL_ONE_MINUS_SRC_ALPHA);
        Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
        camera.update();
        tiledMapRenderer.setView(camera);
        tiledMapRenderer.render();
    }

    @Override
    public boolean keyDown(int keycode) {
        return false;
    }

    @Override
    public boolean keyUp(int keycode) {
        if(keycode == Input.Keys.LEFT)
            camera.translate(-32,0);
        if(keycode == Input.Keys.RIGHT)
            camera.translate(32,0);
        if(keycode == Input.Keys.UP)
            camera.translate(0,-32);
        if(keycode == Input.Keys.DOWN)
            camera.translate(0,32);
        if(keycode == Input.Keys.NUM_1)
            tiledMap.getLayers().get(0).setVisible(!tiledMap.getLayers().get(0).isVisible());
        if(keycode == Input.Keys.NUM_2)
            tiledMap.getLayers().get(1).setVisible(!tiledMap.getLayers().get(1).isVisible());
        return false;
    }

    @Override
    public boolean keyTyped(char character) {

        return false;
    }

    @Override
    public boolean touchDown(int screenX, int screenY, int pointer, int button) {
        Vector3 clickCoordinates = new Vector3(screenX,screenY,0);
        Vector3 position = camera.unproject(clickCoordinates);
        sprite.setPosition(position.x, position.y);
        return true;
    }

    @Override
    public boolean touchUp(int screenX, int screenY, int pointer, int button) {
        return false;
    }

    @Override
    public boolean touchDragged(int screenX, int screenY, int pointer) {
        return false;
    }

    @Override
    public boolean mouseMoved(int screenX, int screenY) {
        return false;
    }

    @Override
    public boolean scrolled(int amount) {
        return false;
    }
}

 

Now when you run it, wherever you click the player sprite will be positioned correctly, and behind objects:

[Screenshot: the sprite positioned at the clicked location, drawn behind map objects]


Under this system, however, you just treat your sprite normally: update its position and it is updated in the game world.
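For instance, smooth pixel-level movement is now just a matter of updating the Sprite each frame.  A minimal sketch, assuming the custom renderer above and an arbitrarily chosen speed:

// At the top of render(), before tiledMapRenderer.render():
float speed = 100f * Gdx.graphics.getDeltaTime();   // pixels per second, picked arbitrarily
if (Gdx.input.isKeyPressed(Input.Keys.D)) sprite.translateX(speed);
if (Gdx.input.isKeyPressed(Input.Keys.A)) sprite.translateX(-speed);
// The custom renderer then draws the sprite at its new position, still beneath the upper layers.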

 

There is one other possible way to support this functionality, which is a bit of a hybrid of both approaches.  Like the earlier example, we add a new layer to the map; in this case we are going to do it in Tiled instead of programmatically.

 

In Tiled, create a new object layer and position it like so:

[Screenshot: the new object layer created in Tiled]

 

Now we are going to access it using the MapObject class, specifically TextureMapObject ( a somewhat confusingly named class… ).  Once again this requires a custom tile renderer implementation, this time overriding the renderObject() method, like so:

package com.gamefromscratch;

import com.badlogic.gdx.maps.MapObject;
import com.badlogic.gdx.maps.objects.TextureMapObject;
import com.badlogic.gdx.maps.tiled.TiledMap;
import com.badlogic.gdx.maps.tiled.renderers.OrthogonalTiledMapRenderer;

public class OrthogonalTiledMapRendererWithSprites extends OrthogonalTiledMapRenderer {

    public OrthogonalTiledMapRendererWithSprites(TiledMap map) {
        super(map);
    }

    @Override
    public void renderObject(MapObject object) {
        if(object instanceof TextureMapObject) {
            TextureMapObject textureObj = (TextureMapObject) object;
                spriteBatch.draw(textureObj.getTextureRegion(), textureObj.getX(), textureObj.getY());
        }
    }
}

 

renderObject() is called to render the various, well… MapObjects in the tile map.  The default implementation is empty, so we need to provide one.  Here we simply check whether our object is a TextureMapObject and, if it is, we draw it using the spriteBatch, much like before.

 

A side effect is that we no longer use the Sprite class for positioning.  Instead we create a TextureRegion again and control position via the TextureMapObject setX()/setY() methods.  Let’s take a look at the result in our main application:

package com.gamefromscratch;

import com.badlogic.gdx.ApplicationAdapter;
import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.Input;
import com.badlogic.gdx.InputProcessor;
import com.badlogic.gdx.graphics.GL20;
import com.badlogic.gdx.graphics.OrthographicCamera;
import com.badlogic.gdx.graphics.Texture;
import com.badlogic.gdx.graphics.g2d.Sprite;
import com.badlogic.gdx.graphics.g2d.SpriteBatch;
import com.badlogic.gdx.graphics.g2d.TextureRegion;
import com.badlogic.gdx.maps.MapLayer;
import com.badlogic.gdx.maps.objects.TextureMapObject;
import com.badlogic.gdx.maps.tiled.TiledMap;
import com.badlogic.gdx.maps.tiled.TiledMapRenderer;
import com.badlogic.gdx.maps.tiled.TmxMapLoader;

import com.badlogic.gdx.math.Vector3;

public class TiledTest extends ApplicationAdapter implements InputProcessor {
    Texture img;
    TiledMap tiledMap;
    OrthographicCamera camera;
    TiledMapRenderer tiledMapRenderer;
    SpriteBatch sb;
    Texture texture;
    Sprite sprite;
    MapLayer objectLayer;

    TextureRegion textureRegion;

    @Override
    public void create () {
        float w = Gdx.graphics.getWidth();
        float h = Gdx.graphics.getHeight();

        camera = new OrthographicCamera();
        camera.setToOrtho(false,w,h);
        camera.update();
        tiledMap = new TmxMapLoader().load("MyCrappyMap.tmx");
        tiledMapRenderer = new OrthogonalTiledMapRendererWithSprites(tiledMap);
        Gdx.input.setInputProcessor(this);
        texture = new Texture(Gdx.files.internal("pik.png"));

        objectLayer = tiledMap.getLayers().get("objects");
        textureRegion = new TextureRegion(texture,64,64);

        TextureMapObject tmo = new TextureMapObject(textureRegion);
        tmo.setX(0);
        tmo.setY(0);
        objectLayer.getObjects().add(tmo);
    }

    @Override
    public void render () {
        Gdx.gl.glClearColor(1, 0, 0, 1);
        Gdx.gl.glBlendFunc(GL20.GL_SRC_ALPHA, GL20.GL_ONE_MINUS_SRC_ALPHA);
        Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
        camera.update();
        tiledMapRenderer.setView(camera);
        tiledMapRenderer.render();
    }

    @Override
    public boolean keyDown(int keycode) {
        return false;
    }

    @Override
    public boolean keyUp(int keycode) {
        if(keycode == Input.Keys.LEFT)
            camera.translate(-32,0);
        if(keycode == Input.Keys.RIGHT)
            camera.translate(32,0);
        if(keycode == Input.Keys.UP)
            camera.translate(0,-32);
        if(keycode == Input.Keys.DOWN)
            camera.translate(0,32);
        if(keycode == Input.Keys.NUM_1)
            tiledMap.getLayers().get(0).setVisible(!tiledMap.getLayers().get(0).isVisible());
        if(keycode == Input.Keys.NUM_2)
            tiledMap.getLayers().get(1).setVisible(!tiledMap.getLayers().get(1).isVisible());
        return false;
    }

    @Override
    public boolean keyTyped(char character) {

        return false;
    }

    @Override
    public boolean touchDown(int screenX, int screenY, int pointer, int button) {
        Vector3 clickCoordinates = new Vector3(screenX,screenY,0);
        Vector3 position = camera.unproject(clickCoordinates);
        TextureMapObject character = (TextureMapObject)tiledMap.getLayers().get("objects").getObjects().get(0);
        character.setX((float)position.x);
        character.setY((float)position.y);
        return true;
    }

    @Override
    public boolean touchUp(int screenX, int screenY, int pointer, int button) {
        return false;
    }

    @Override
    public boolean touchDragged(int screenX, int screenY, int pointer) {
        return false;
    }

    @Override
    public boolean mouseMoved(int screenX, int screenY) {
        return false;
    }

    @Override
    public boolean scrolled(int amount) {
        return false;
    }
}

 

It’s mostly the same, except of course for the different renderer, the fact that we no longer use a Sprite object, and that in touchDown() we locate the TextureMapObject in the “objects” layer and update its position.

 

 

Which method you use is up to you, your personal preference and the requirements of your game.  Myself, I find the second to be probably the cleanest.  Of course, if your sprite doesn’t need to care about tile depth at all, you can ignore all of this, just draw with a SpriteBatch over top and save yourself many headaches!
