16. April 2014

 

If you’ve never used Tiled, read this first!

 

In this tutorial part we are going to look at loading orthogonal maps in LibGDX.  Orthogonal basically means “at a right angle to”, which is a fancy way of saying the camera looks straight at the scene.  Another, much less fancy, word for this style of game is “top down”.  Common examples include almost every single game from the 16-bit generation of consoles, such as Zelda or Mega Man.

 

One of the first problems you are going to encounter is: how do you create your maps?  One very common solution is the Tiled map editor, which fortunately I just completed a tutorial on!  This tutorial assumes you know how to generate a TMX file, so if you haven’t gone through the linked tutorial, be sure to do so first.  Fortunately, LibGDX makes it very easy to work with TMX files.

 

First things first, we need to copy the TMX file and any/all tileset images it uses into your project’s assets folder, like so:

[image: the assets folder containing the TMX and tileset files]

You may notice that, unlike earlier tutorials, I am currently using IntelliJ.  With the recent switch to Gradle it is much easier to get up and running in IntelliJ, and I massively prefer it to Eclipse.  That said, everything I say is equally valid in Eclipse or IntelliJ, and when there are differences I will point them out.  If you want to get started with IntelliJ, read here.

 

Alright, back on topic…

 

Now that you have the map and tilesets in your project, let’s jump right in with the code:

 

package com.gamefromscratch;

import com.badlogic.gdx.ApplicationAdapter;
import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.Input;
import com.badlogic.gdx.InputProcessor;
import com.badlogic.gdx.graphics.GL20;
import com.badlogic.gdx.graphics.OrthographicCamera;
import com.badlogic.gdx.maps.tiled.TiledMap;
import com.badlogic.gdx.maps.tiled.TiledMapRenderer;
import com.badlogic.gdx.maps.tiled.TmxMapLoader;
import com.badlogic.gdx.maps.tiled.renderers.OrthogonalTiledMapRenderer;

public class TiledTest extends ApplicationAdapter implements InputProcessor {
    TiledMap tiledMap;
    OrthographicCamera camera;
    TiledMapRenderer tiledMapRenderer;
    
    @Override
    public void create () {
        float w = Gdx.graphics.getWidth();
        float h = Gdx.graphics.getHeight();

        camera = new OrthographicCamera();
        camera.setToOrtho(false,w,h);
        camera.update();
        tiledMap = new TmxMapLoader().load("MyCrappyMap.tmx");
        tiledMapRenderer = new OrthogonalTiledMapRenderer(tiledMap);
        Gdx.input.setInputProcessor(this);
    }

    @Override
    public void render () {
        Gdx.gl.glClearColor(1, 0, 0, 1);
        Gdx.gl.glBlendFunc(GL20.GL_SRC_ALPHA, GL20.GL_ONE_MINUS_SRC_ALPHA);
        Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
        camera.update();
        tiledMapRenderer.setView(camera);
        tiledMapRenderer.render();
    }

    @Override
    public boolean keyDown(int keycode) {
        return false;
    }

    @Override
    public boolean keyUp(int keycode) {
        if(keycode == Input.Keys.LEFT)
            camera.translate(-32,0);
        if(keycode == Input.Keys.RIGHT)
            camera.translate(32,0);
        if(keycode == Input.Keys.UP)
            camera.translate(0,32);
        if(keycode == Input.Keys.DOWN)
            camera.translate(0,-32);
        if(keycode == Input.Keys.NUM_1)
            tiledMap.getLayers().get(0).setVisible(!tiledMap.getLayers().get(0).isVisible());
        if(keycode == Input.Keys.NUM_2)
            tiledMap.getLayers().get(1).setVisible(!tiledMap.getLayers().get(1).isVisible());
        return false;
    }

    @Override
    public boolean keyTyped(char character) {

        return false;
    }

    @Override
    public boolean touchDown(int screenX, int screenY, int pointer, int button) {
        return false;
    }

    @Override
    public boolean touchUp(int screenX, int screenY, int pointer, int button) {
        return false;
    }

    @Override
    public boolean touchDragged(int screenX, int screenY, int pointer) {
        return false;
    }

    @Override
    public boolean mouseMoved(int screenX, int screenY) {
        return false;
    }

    @Override
    public boolean scrolled(int amount) {
        return false;
    }
}

 

When you run the code you should see your map.  Pressing the arrow keys will scroll around the map ( and show bright red when you’ve moved beyond the extents of your map ).  Pressing 1 or 2 will toggle the visibility of each of the two layers in your map.  ( See the tutorial on Tiled if that makes no sense. )

 

[image: the Tiled map rendered by LibGDX]

 

The impressive thing here is how little code was required to accomplish so much.  In a nutshell, it was just this:

 

camera = new OrthographicCamera();
camera.setToOrtho(false,w,h);
camera.update();
tiledMap = new TmxMapLoader().load("MyCrappyMap.tmx");
tiledMapRenderer = new OrthogonalTiledMapRenderer(tiledMap);

 

We create an OrthographicCamera, set it to the dimensions of the screen and update() it.  Next we load our map using TmxMapLoader.load() and create an OrthogonalTiledMapRenderer, passing in our tiled map.
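As an aside: if you would rather work in world units than pixels, OrthogonalTiledMapRenderer also accepts a unit scale in its constructor.  A minimal sketch, assuming 32x32 pixel tiles ( the viewport numbers are just examples ):

// With a unit scale of 1/32f, each 32x32 pixel tile becomes one world unit,
// so the camera viewport is measured in tiles instead of pixels.
camera = new OrthographicCamera();
camera.setToOrtho(false, 30, 20);    // show a 30x20 tile view of the map
tiledMapRenderer = new OrthogonalTiledMapRenderer(tiledMap, 1 / 32f);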

 

In our render method:

camera.update();
tiledMapRenderer.setView(camera);
tiledMapRenderer.render();

Basically we update the camera ( in case we moved it using the arrow keys ), pass it to the TiledMapRenderer with setView() and finally render() the map.  That’s it.

As you can see in our key handler:

 

@Override
public boolean keyUp(int keycode) {
    if(keycode == Input.Keys.LEFT)
        camera.translate(-32,0);
    if(keycode == Input.Keys.RIGHT)
        camera.translate(32,0);
    if(keycode == Input.Keys.UP)
        camera.translate(0,32);
    if(keycode == Input.Keys.DOWN)
        camera.translate(0,-32);
    if(keycode == Input.Keys.NUM_1)
        tiledMap.getLayers().get(0).setVisible(!tiledMap.getLayers().get(0).isVisible());
    if(keycode == Input.Keys.NUM_2)
        tiledMap.getLayers().get(1).setVisible(!tiledMap.getLayers().get(1).isVisible());
    return false;
}

 

Navigating around the map is simply a matter of moving the camera.  We move in 32-pixel steps because that is the size of our tiles.  So basically we move the camera left/right/up/down by one tile each time an arrow key is pressed.  ( Remember, with setToOrtho(false, w, h) the y-axis points up, which is why UP translates the camera by a positive y value. )  In the event the 1 or 2 key is pressed, we toggle the visibility of that particular layer.  As you can see, you can access the TiledMap’s layers using the getLayers().get() function.
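getLayers() can also look a layer up by name, and if you cast the result to TiledMapTileLayer you can inspect individual cells, which comes in handy later for things like collision checks.  A quick sketch; the layer name “Background” is just an example, use whatever you named the layer in Tiled:

import com.badlogic.gdx.maps.tiled.TiledMapTileLayer;

// Fetch a layer by the name you gave it in Tiled and read one of its
// cells.  Cell coordinates are in tiles, not pixels.
TiledMapTileLayer layer = (TiledMapTileLayer)tiledMap.getLayers().get("Background");
TiledMapTileLayer.Cell cell = layer.getCell(5, 5);
if(cell != null)    // empty cells come back null
    System.out.println("Tile id at 5,5 is " + cell.getTile().getId());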

 

In the next part we will look at adding some more complex functionality.

 

On to part two.

Programming


21. January 2014

 

Back in this post I discussed ways of making dynamically equipped 2D sprites.  One way was to dynamically render a 3D object out to 2D textures.  So far we have looked at working in 3D in LibGDX, then at exporting and rendering an animated model from Blender; the next step is rendering a 3D object to a texture.  At first glance I thought this would be difficult, but in reality it was stupidly simple.  In fact, the very first bit of code I wrote simply worked!  Well, except for the results being upside down I suppose… details, details…

 

Anyways, that is what this post discusses: taking a 3D scene in LibGDX and rendering it to a 2D texture.  The code is just a modification of the code from the Blender to GDX post from a couple of days back.

 

package com.gamefromscratch;

import com.badlogic.gdx.ApplicationListener;
import com.badlogic.gdx.Files.FileType;
import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.InputProcessor;
import com.badlogic.gdx.graphics.GL10;
import com.badlogic.gdx.graphics.PerspectiveCamera;
import com.badlogic.gdx.graphics.Pixmap.Format;
import com.badlogic.gdx.graphics.Texture;
import com.badlogic.gdx.graphics.g2d.SpriteBatch;
import com.badlogic.gdx.graphics.g2d.TextureRegion;
import com.badlogic.gdx.graphics.g3d.Environment;
import com.badlogic.gdx.graphics.g3d.Model;
import com.badlogic.gdx.graphics.g3d.ModelBatch;
import com.badlogic.gdx.graphics.g3d.ModelInstance;
import com.badlogic.gdx.graphics.g3d.attributes.ColorAttribute;
import com.badlogic.gdx.graphics.g3d.loader.G3dModelLoader;
import com.badlogic.gdx.utils.UBJsonReader;
import com.badlogic.gdx.graphics.g3d.utils.AnimationController;
import com.badlogic.gdx.graphics.g3d.utils.AnimationController.AnimationDesc;
import com.badlogic.gdx.graphics.g3d.utils.AnimationController.AnimationListener;
import com.badlogic.gdx.graphics.glutils.FrameBuffer;



public class ModelTest implements ApplicationListener, InputProcessor {
    private PerspectiveCamera camera;
    private ModelBatch modelBatch;
    private Model model;
    private ModelInstance modelInstance;
    private Environment environment;
    private AnimationController controller;
    private boolean screenShot = false;
    private FrameBuffer frameBuffer;
    private Texture texture = null;
    private TextureRegion textureRegion;
    private SpriteBatch spriteBatch;
    
    @Override
    public void create() {        
        // Create camera sized to screens width/height with Field of View of 75 degrees
        camera = new PerspectiveCamera(
                75,
                Gdx.graphics.getWidth(),
                Gdx.graphics.getHeight());
        
        // Move the camera 7 units back along the z-axis and look at the origin
        camera.position.set(0f,0f,7f);
        camera.lookAt(0f,0f,0f);
        
        // Near and Far (plane) represent the minimum and maximum ranges of the camera in, um, units
        camera.near = 0.1f; 
        camera.far = 300.0f;

        // A ModelBatch is like a SpriteBatch, just for models.  Use it to batch up geometry for OpenGL
        modelBatch = new ModelBatch();
        
        // Model loader needs a binary json reader to decode
        UBJsonReader jsonReader = new UBJsonReader();
        // Create a model loader passing in our json reader
        G3dModelLoader modelLoader = new G3dModelLoader(jsonReader);
        // Now load the model by name
        // Note, the model (g3db file ) and textures need to be added to the assets folder of the Android proj
        model = modelLoader.loadModel(Gdx.files.getFileHandle("data/benddemo.g3db", FileType.Internal));
        // Now create an instance.  Instance holds the positioning data, etc of an instance of your model
        modelInstance = new ModelInstance(model);

        //move the model down a bit on the screen ( in a z-up world, down is -z ).
        modelInstance.transform.translate(0, 0, -2);

        // Finally we want some light, or we wont see our color.  The environment gets passed in during
        // the rendering process.  Create one, then create an Ambient ( non-positioned, non-directional ) light.
        environment = new Environment();
        environment.set(new ColorAttribute(ColorAttribute.AmbientLight, 0.8f, 0.8f, 0.8f, 1.0f));
        
        // You use an AnimationController to um, control animations.  Each controller is tied to a model instance
        controller = new AnimationController(modelInstance);  
        // Pick the current animation by name
        controller.setAnimation("Bend",1, new AnimationListener(){

            @Override
            public void onEnd(AnimationDesc animation) {
                // this will be called when the current animation is done. 
                // queue up another animation called "Balloon". 
                // Passing a negative to loop count loops forever.  1f for speed is normal speed.
                controller.queue("Balloon",-1,1f,null,0f);
            }

            @Override
            public void onLoop(AnimationDesc animation) {
                // TODO Auto-generated method stub
                
            }
            
        });
        
        frameBuffer = new FrameBuffer(Format.RGB888,Gdx.graphics.getWidth(),Gdx.graphics.getHeight(),false);
        Gdx.input.setInputProcessor(this);
        
        spriteBatch = new SpriteBatch();
    }

    @Override
    public void dispose() {
        modelBatch.dispose();
        model.dispose();
    }

    @Override
    public void render() {
        // You've seen all this before, just be sure to clear the GL_DEPTH_BUFFER_BIT when working in 3D
        Gdx.gl.glViewport(0, 0, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
        Gdx.gl.glClearColor(1, 1, 1, 1);
        Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);

        // When you change the camera details, you need to call update();
        // Also note, you need to call update() at least once.
        camera.update();
        
        // You need to call update on the animation controller so it will advance the animation.  Pass in frame delta
        controller.update(Gdx.graphics.getDeltaTime());
        
        
        // If the user requested a screenshot, we need to call begin on our framebuffer
        // This redirects output to the framebuffer instead of the screen.
        if(screenShot)
            frameBuffer.begin();

        // Like spriteBatch, just with models!  pass in the box Instance and the environment
        modelBatch.begin(camera);
        modelBatch.render(modelInstance, environment);
        modelBatch.end();
        
        // Now tell OpenGL that we are done sending graphics to the framebuffer
        if(screenShot)
        {
            frameBuffer.end();
            // get the graphics rendered to the framebuffer as a texture
            texture = frameBuffer.getColorBufferTexture();
            // welcome to the wonderful world of different coordinate systems!
            // simply put, the framebuffer is upside down to normal textures, so we have to flip it
            // Use a TextureRegion to do so
            textureRegion = new TextureRegion(texture);
            // and.... FLIP!  V (vertical) only
            textureRegion.flip(false, true);
        }
        
        // In the case that we have a texture object to actually draw, we do so
        // using the old familiar SpriteBatch to do so.
        if(texture != null)
        {
            spriteBatch.begin();
            spriteBatch.draw(textureRegion,0,0);
            spriteBatch.end();
            screenShot = false;
        }
    }

    @Override
    public void resize(int width, int height) {
    }

    @Override
    public void pause() {
    }

    @Override
    public void resume() {
    }

    @Override
    public boolean keyDown(int keycode) {
        // TODO Auto-generated method stub
        return false;
    }

    @Override
    public boolean keyUp(int keycode) {
        // TODO Auto-generated method stub
        return false;
    }

    @Override
    public boolean keyTyped(char character) {
        // If the user hits a key, take a screen shot.
        this.screenShot = true;
        return false;
    }

    @Override
    public boolean touchDown(int screenX, int screenY, int pointer, int button) {
        // TODO Auto-generated method stub
        return false;
    }

    @Override
    public boolean touchUp(int screenX, int screenY, int pointer, int button) {
        // TODO Auto-generated method stub
        return false;
    }

    @Override
    public boolean touchDragged(int screenX, int screenY, int pointer) {
        // TODO Auto-generated method stub
        return false;
    }

    @Override
    public boolean mouseMoved(int screenX, int screenY) {
        // TODO Auto-generated method stub
        return false;
    }

    @Override
    public boolean scrolled(int amount) {
        // TODO Auto-generated method stub
        return false;
    }
}

And… that’s it.

Run the code and you get the exact same results as the last example:

[image: blenderAnimation]

 

However, if you press any key, the current frame is saved to a texture, and that is instead displayed on screen.  Press a key again and it will update to the current frame:

[image: the current frame rendered to a texture and drawn on screen]

Of course, the background colour is different, because we didn’t explicitly set one.  The above is a LibGDX Texture object, which can now be treated as a 2D sprite, used as a texture map, whatever.

 

The code is ultra simple.  We have a toggle variable, screenShot, that gets set when the user hits a key.  The actual process of rendering to texture is done with the magic of FrameBuffers.  Think of a framebuffer as a place for OpenGL to render other than the screen.  So instead of drawing the graphics to the screen, OpenGL draws them into a memory buffer.  We then get this buffer’s contents as a texture using getColorBufferTexture().  The only complication is that the frame buffer is rendered upside down.  This is easily fixed by wrapping the Texture in a TextureRegion and flipping the V coordinate.  Finally we display our newly generated texture using our old friend, the SpriteBatch.
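Distilled to its essence, the render to texture pattern in the code above is just this:

// Redirect rendering to the off-screen buffer
frameBuffer.begin();
modelBatch.begin(camera);
modelBatch.render(modelInstance, environment);
modelBatch.end();
frameBuffer.end();    // back to rendering to the screen

// Grab the result as a texture and flip it right side up
TextureRegion region = new TextureRegion(frameBuffer.getColorBufferTexture());
region.flip(false, true);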

 

 

Gotta love it when something you expect to be difficult ends up being ultra easy.  Next I have to measure the performance, to see if this is something that can be done in a realtime situation, or whether we need to do it on load/equipment change.

Programming


17. January 2014

 

OK, right up front I’ll admit I was a bit asleep at the wheel on this bit of news, as it actually happened last week…

 

Anyways, Havok have released the 2D Toolkit for Project Anarchy.  Project Anarchy is a bundling of Havok’s game development tools for making mobile games, all completely free ( as in beer ).  They also somewhat recently announced indie-friendly pricing for supporting desktop targets.  The core of Project Anarchy is the Vision engine, as well as several dedicated libraries Havok makes for physics, AI, animation, etc.  Many of these libraries are routinely used in AAA games.  If you are interested in learning more, I created a tutorial series shortly after PA was released.

 

The 2D toolkit aims to make 2D game development easier, as the existing tools are primarily geared towards 3D development.  In Havok’s own words, the 2D Toolkit is:

 

Although you can make 2D games with Project Anarchy out-of-the-box, it takes some custom code to get spritesheets working and doing basic 2D collision through LUA. This sample 2D toolset gives you a couple new entities and utilities to create 2D games entirely through LUA without having to write any custom C++ code. There is still a lot of work left before this can be considered polished, but this is the first step and we wanted to give you, the community, a first look at this so that you can provide us with feedback early on.

Features

  • Adds two new entities: Sprite and 2D Camera
  • Automated sprite generation using Shoebox
  • Runtime playback of spritesheets
  • Collision detection and LUA callbacks

 

The toolkit also ships with three samples: Shooter, Impossible and Physics.  This is very much a work in progress, with not all features yet available across all supported platforms.  The toolkit is open source and hosted on GitHub.

News


9. January 2014

 

Right off the hop for my new game I have to deal with a problem that many games face, but one that could have a huge impact on how my game is structured, so I am going to address it with a proof of concept right up front.  That thing, as the title probably spoils, is character customization.

 

As I said earlier, this is a pretty common problem in games and something you see questions about quite often if you hang around game development forums for any period of time.  So what do I mean by character customization?  Ever played Diablo or a similar title?  When you change your gear in Diablo, your character is updated on screen.  This behaviour is pretty much expected in this day and age… personally I really hate it when I am playing an RPG and the weapon I equip has no effect on my avatar.  That said, it is certainly a non-trivial problem and one that could have a pretty big impact on your game.

 

Let’s start by taking a look at the various options… at least the ones I can think of.

 

2D - Pre-render all possible configurations

Of course, brute forcing it is always a possibility, and in a few cases may actually be ideal.  Basically, for a 2D game you simply create a sprite sheet for every combination of equipment.  So if you have animation sequences for “walk_left” and “walk_right”, you would have to create a version for each possible equipable item; you might have “walk_left_with_axe” and “walk_left_with_sword” sprite sheets for the different weapon combinations.  If you are rendering your sprite sheets automatically from a 3D application, this could actually be the easiest approach.  Of course, there are some seriously massive downsides.  Memory usage quickly becomes outrageous: every independent equipment option multiplies the number of sheets you need, so a single optional item already doubles them.  Once you start adding more and more options, this approach quickly becomes unfeasible.  If you only have a very small amount of customization ( two or three weapon options ), it may appeal due to its simplicity.  With anything more than a couple of combinations though, it is pretty much useless.

 

2D – Pre-rendered spritesheet + pre-rendered weapon animation frames.

You can use a regular sprite sheet and simply keep the weapons separate.  So basically you have two separate sprite sheets: one of the character without weapons, and one with just the weapons.  Then at run-time you combine them.  Consider these two sprite sheets ( taken from a StackOverflow question ).

Animated Character:

Player SpriteSheet

Weapon:

Weapon SpriteSheet

Then at run-time you combine the two sprites together depending on what is equipped.

In fact, if your weapon isn’t animated ( as in it doesn’t move on its own, like a sword, as opposed to a weapon like a whip or nunchucks ) you can get by with a hard point system, where you define the mount position of the weapon ( for example, the player’s right hand ) and its orientation.  Basically, for each frame of animation you provide an x and y coordinate as well as a rotation amount that represent where and how the weapon should be mounted.  Then at runtime you combine the two sprites, positioning and rotating the weapon sprite relative to the mount/hardpoint.  This way, you only need a single image per equipable weapon.
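To make that concrete, here is a minimal sketch of what a hardpoint system might look like in LibGDX.  The HardPoint class and every name in it are hypothetical, purely to illustrate the idea:

import com.badlogic.gdx.graphics.g2d.SpriteBatch;
import com.badlogic.gdx.graphics.g2d.TextureRegion;

// Hypothetical per-frame mount data: where the weapon attaches to the
// character sprite for this animation frame, and at what angle.
class HardPoint {
    float x, y;        // offset from the character sprite's bottom-left corner
    float rotation;    // in degrees

    // Draw the character frame first, then the weapon positioned and
    // rotated at this frame's hardpoint.
    static void drawEquipped(SpriteBatch batch, TextureRegion charFrame,
                             TextureRegion weapon, HardPoint hp, float x, float y) {
        batch.draw(charFrame, x, y);
        batch.draw(weapon,
                x + hp.x, y + hp.y,                                 // mount position
                0, 0,                                               // rotate around the weapon's own corner
                weapon.getRegionWidth(), weapon.getRegionHeight(),
                1f, 1f, hp.rotation);
    }
}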

This approach has a couple of downsides.  First, you need a separate data file to hold the mount point data, and possibly a tool to generate it.  Second, it makes bounding/hit detection more difficult, as the mounted weapon may exceed the dimensions of the originating sprite.  Third, you need to deal with z-depth issues: you can’t simply paste the weapon on top of the sprite, as what happens when the character is walking right but carrying the weapon in the left hand?  To handle this, your draw order basically needs to be: render obscured weapons, render the character sprite, render foreground weapons.  Finally, weapons with their own animation, like a whip snapping, would require even more effort.

All told, this is probably the easiest while still flexible system for 2D animation.  It requires a bit more work than simply brute forcing your sprite sheets, but the payoff in reduced size and complexity is well worth it, especially if you have more than a couple of items.  One other big win with this approach is that weapons can then be re-used across different sprites, a huge win!  You can model a single sword and it will be available to every sprite sheet that has hard/mount point information available.

 

3D - Bone/hardpoint system

If your game is pure 3D, this is often the way you want to go.  If your character is a knight, for example, in your 3D modelling application you create named hard points in your model, either as bones or as an object of some form, such as a null object, point or invisible box.  Then at runtime you bind other models to those locations.  So for example, if you equip a long sword and shield, your code searches for the “right_arm_shield” bone and uses it as the parent object for a shield 3D model.  Then you bind a sword to, say, the “left_arm_hand” bone.  You can see an example of this behaviour in this UDK example.

This approach is basically identical to the last 2D method mentioned, except in 3D, and you don’t generally need to define your own hardpoint system, as most 3D packages and engines already support it out of the box, either by binding to a bone or by accessing and parenting 3D objects to named locations.  Like working in 2D, your weapon and character are two separate entities, like so:

[image: 3P_Weapon_Attach.JPG]
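In LibGDX terms, that binding might look something like the following sketch.  The node name “hand_L” is purely hypothetical and depends entirely on how the rig was named in your modelling package:

// Assumes characterInstance and weaponInstance are ModelInstances loaded as
// in earlier posts.  Find the named bone/node once, after loading:
Node hand = characterInstance.getNode("hand_L");    // com.badlogic.gdx.graphics.g3d.model.Node

// Then each frame, after the AnimationController has updated the skeleton,
// place the weapon at: character world transform * bone global transform.
weaponInstance.transform.set(characterInstance.transform).mul(hand.globalTransform);

// Finally, render both instances with the same ModelBatch as usual.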

One of the major advantages of 3D over 2D is flexibility.  Most animations in 2D need to be pre-calculated, while in 3D you have a great deal more flexibility, with the computer filling in the blanks for you.  Unlike in 2D, additional 3D animations add very little to the data file sizes.

 

2D Bone based animation

One of the major downsides to 2D animation is that, as you add more animations, it takes up more and more space.  People have been working on solutions, one of the most common of which is 2D bone animation.  How exactly can you use bones on a 2D image file, you ask?  That’s a good question!  In 3D it’s pretty simple: the motion/influence of the bone transforms the attached geometry.  In 2D there is no geometry, so how can you do this?

Well, the simple version is: you cut your bitmap or vector image up into body parts.  So instead of a single image of an animated character, you instead have an image for the foot, lower leg, upper leg, torso, waist, shoulder, left upper arm, etc…  Then you draw 2D bones that control the animation of this hierarchy of images.  There are applications/libraries that support this, making it mostly transparent for you.  Here is one such example from Spine by Esoteric Software.

 

The character:

[image: Creating Bones]

 

It looks like a single image ( and you can see a couple bones being drawn in the foot/leg area ), but it is actually composed of a few dozen source images:

[image: Set Parent]

 

Basically the workflow goes: you create your traditional image in your regular art program, except instead of creating a single image, you create dozens of hopefully seamless ones representing each body part.  You stitch them together, setting the drawing order ( what’s covering what ), then define bones to create animations.  This has the double advantage of massively decreasing space requirements while simplifying the animation process.  The downsides are that creating your initial image is a great deal more difficult and, of course, you need to buy ( or roll your own ) bone system, such as Spine.
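If you are curious what is going on under the hood of such tools, here is a tiny sketch of the core idea, a transform hierarchy of image parts.  This is illustrative only, not the API of Spine or any other package:

import com.badlogic.gdx.graphics.g2d.SpriteBatch;
import com.badlogic.gdx.graphics.g2d.TextureRegion;
import com.badlogic.gdx.math.Affine2;

// Each body part image hangs off a bone; a bone's world transform is its
// parent's world transform combined with its own local offset and rotation.
class Bone {
    Bone parent;                 // null for the root bone
    float x, y, rotation;        // local offset/rotation relative to the parent
    TextureRegion image;         // body-part image ( may be null for helper bones )

    Affine2 worldTransform() {
        Affine2 t = (parent == null) ? new Affine2() : parent.worldTransform();
        return t.translate(x, y).rotate(rotation);
    }

    void draw(SpriteBatch batch) {
        if(image != null)
            batch.draw(image, image.getRegionWidth(), image.getRegionHeight(), worldTransform());
    }
}

Animating the character is then just a matter of keyframing each bone’s local x, y and rotation; every attached image follows along automatically.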

 

Spine is not the only option; there is also Spriter from BrashMonkey, and probably others I am unaware of.  Both of these are commercial packages, but then, both are dirt cheap ( IMHO ), with price tags between $25 and $250, with $60 and $25 being the norm for each package respectively.

The nice thing about taking this segmented bone approach is that you basically get the ability to swap components or bind weapons for free.

 

2.5D hybrid approach

2 1/2D is becoming more and more popular these days.  Essentially a 2.5D game is a 3D game with 2D assets ( such as 2D sprites over a 3D background ), but more commonly it’s 3D sprites over a 2D background.  How does this work?

Well, there are two basic approaches.  You model, texture and animate your 3D model as per normal.  Any character customization you would perform as if you were working in full 3D.  Now is where the two approaches vary, depending on performance, the hardware you are running on, etc…  The first option is to generate a sprite sheet dynamically: when your character is loaded or changes equipment, in the background your game renders a sprite sheet of the updated character containing every animation frame, then you use this sheet as if it were a normal 2D sprite sheet.  The other option is to re-render your character each time it moves.  Creating a sprite sheet consumes more memory but very little GPU or CPU; generating your character’s current animation frame dynamically, on the other hand, takes almost no memory but carries a much higher CPU/GPU burden.  Which is right for your game is obviously up to you.
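To make the first option concrete, here is a rough sketch of baking a sheet at runtime with a FrameBuffer, building on the render to texture code from the earlier post ( camera, controller, modelBatch, modelInstance and environment are set up as there; the frame sizes and counts are made-up numbers, and Pixmap.Format and GL20 need importing ):

// Step the animation and render each frame into its own cell of one big
// off-screen buffer, producing a sprite sheet on the fly.
int frameW = 128, frameH = 128, cols = 8, rows = 4;
FrameBuffer sheet = new FrameBuffer(Format.RGBA8888, cols * frameW, rows * frameH, true);

sheet.begin();
Gdx.gl.glClearColor(0, 0, 0, 0);
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT | GL20.GL_DEPTH_BUFFER_BIT);
for(int i = 0; i < cols * rows; i++) {
    controller.update(1 / 30f);                  // advance the animation one step
    Gdx.gl.glViewport((i % cols) * frameW,       // aim the viewport at cell i
            (i / cols) * frameH, frameW, frameH);
    modelBatch.begin(camera);
    modelBatch.render(modelInstance, environment);
    modelBatch.end();
}
sheet.end();                                     // restores the screen viewport

// Wrap the result; remember the framebuffer texture is upside down.
TextureRegion baked = new TextureRegion(sheet.getColorBufferTexture());
baked.flip(false, true);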

Of course the 2.5D approach certainly has its share of problems too.  First off, it is the most complicated of the bunch.  Second, you can’t do any “by hand” calculations on your generated sprites/sprite sheets, as they are now being rendered dynamically.  This means no ability to create pre-calculated bounding volumes, for example.  On the other hand, the 2.5D approach gives you most of the advantages of 3D ( compact animation data, easy programmatic modification, the ability to alter texture mapping, lighting effects, etc… ) but with the simplicity of dealing with a traditional 2D world.

One animation to rule them all!

You may be thinking WAHOO… I can cut my work way down now!  All I need to do is animate an attack sequence and then simply substitute the different weapon images.  Not so fast, I am sorry to say.  Think about it this way… are the animations for swinging an axe and a sword the same?  What about a pole arm?  You are still going to need several different animations to support different classes of weapons, but these techniques will still vastly reduce the amount of data and work required while allowing a ton of customization.

 

 

So then, what am I going to do?

So, which approach am I taking?  Ultimately I am going to require a great deal of customization, going far beyond the examples described above.  One example scenario is a mech with a customizable left weapon: this weapon could be an arm holding a gun, a turret containing one or more guns, or possibly even a large pack of rockets.  Each “weapon” could have its own animation set.  Oh, and I am much more proficient in 3D than 2D.  On the other hand, the game itself is a 2D game, and I would prefer to keep it that way for the simplicity that brings in a number of areas ( level creation tools, path-finding and other AI tasks, physics, cheating at art, etc… ).  As a result, I am going to try the 2.5D approach, where the mech is created and animated in 3D, dynamically equipped, then rendered to 2D.  I am going to try both pre-rendering sprite sheets and dynamically rendering a single frame, and see what the overhead of each is.  I actually have the compounded issue of dealing with enemy characters that are also dynamically generated, so any impact will be magnified many times over!

If I can’t get adequate performance from either approach, I may be forced to go full 3D and emulate a 2D look.  As I said, it’s one of those things that can have a major impact on your game!  That said, I am pretty confident performance won’t be too much of a factor.  My immediate take-away task list is:

  • get a LibGDX project set up that can load and render a textured, animated 3D model exported from Blender. I am pretty new to 3D in LibGDX ( okay… completely new to it )
  • bind another model to a bone on this model ( aka, a sword to a hand bone )
  • try to dynamically generate a sprite sheet of the character’s complete animation.  Measure the resulting memory usage and the time required to generate the sheet
  • re-architect the code to instead generate each frame of animation on demand, every frame.
  • scale both scenarios up to include multiple instances and see the effect on performance
  • measure the performance impact of each approach on both desktop and mobile.

 

The outcome of this experiment will ultimately decide what approach I take.  Stay tuned if that process sounds interesting!

 

Oh, and of course, these are only the scenarios I could come up with for dealing with dynamically equipped characters.  If you have another idea, please share it!

Design Programming


1. October 2013

 

This game engine came to my attention yesterday on Reddit, so I took a bit of time to check it out, and it has potential.

 

First off, it has a few problems.  The installer gives absolutely no feedback to say that it installed: no UI, nothing.  It left me scratching my head about what was going wrong, when in the end everything was working fine behind the scenes.  It all ended up being moot though, as I never received the evaluation key I applied for, so I was never able to run the local executables.  These are all teething/growing-pain issues and no doubt will be resolved in time.  My money says my email provider simply rejected their email… it happens.  Fortunately we can still get a pretty good look, as the IDE runs in the browser as well and there is a demo on their site.

 

Speaking of which, here it is:

[image: the SpellJS editor]

 

You can run the editor in demo mode by clicking here.  Unfortunately you can’t actually create projects this way, but it will give you an idea of what SpellJS can do.  Otherwise you need a key.  There are a couple of options here: free, not free, and slightly more expensive.  Or, more specifically:

 

[image: SpellJS pricing options]

 

 

So basically it’s free for non-commercial use, or 99 euro per developer if you want to make money selling your game directly.  If on the other hand you want to make money via advertising and/or make use of the analytics or cloud features, it’s 239 euro per developer.  All told, fairly reasonable pricing in my opinion, but they will face the same trouble all other engine companies face…  a lot of their competition is free.

 

Then there is the matter of what platforms are supported:

[image: supported platforms]

 

That’s most of the major platforms covered, with Flash support as a fallback for non-compliant browsers.  The HTML5 layer is built over WebGL, falling back to Canvas/CSS3 rendering where WebGL isn’t available.  Android and iOS publishing is as native applications, not as web apps, by the way, which is good, as iOS browser performance is often abysmal while Android is a mixed bag.

 

Ok, back to SpellJS…  the layout is pretty straightforward.

 

Games are laid out in terms of scenes.  On the left-hand side you’ve got the scene graph:

[image: the scene graph]

 

Right-click on a scene and select Render Scene and it appears in the scene view:

[image: the scene view]

 

Here you can see this scene as it appears in your game.  There are buttons across the top for pausing the scene and switching in and out of development mode.  Unfortunately they didn’t seem to work for me; I am not sure if this is a side effect of running the editor in demo mode.  SpellJS is supposed to support live editing, allowing you to change your game as you play it.

 

Across the right-hand side is a context-sensitive area that changes depending on what you have selected.  This, for example, is what happens if you select the physics component in Update:

[image: physics component properties]

 

While if you have an entity selected, such as the HighScore, you will see:

[image: entity properties]

 

This is where you would configure your various entities by setting the properties of attached components, or by adding new components.  Much like Unity, you can attach a series of components to your entities and multiple entities to your scene.

 

So, where the heck does code go?  That is the realm of scripts.  On the left-hand panel, select Library:

[image: the Library panel]

 

And you will see the various assets that make up your game, such as graphics, sounds and… scripts.

[image: library assets]

 

Here, for example, is the script showFinish shown in the in-IDE editor:

[image: the script showFinish in the editor]

 

The editor supports code folding and syntax highlighting, but unfortunately doesn’t seem to support code completion.

 

Scripts aren’t your only option.  If you look at the various systems that compose the update group and, for example, right-click demo_asteroids.system.shoot and select show, the code that composes the system will be shown in the editor.

[image: right-clicking demo_asteroids.system.shoot]

 

[image: the system’s code in the editor]

 

One other big question is… how is the documentation?  Quite good, actually.  There is a Getting Started guide, half a dozen tutorials and a fairly comprehensive, if a little sparse, reference guide.  One annoyance: each click opens a new tab, leaving you with tons of tabs to close.

 

[image: the documentation]

 

All told, this looks like a very polished product, and if you like working in JavaScript it is certainly worth checking out.  Should my product key ever arrive and my calendar open up a bit, I will take a closer look.  You can check them out at SpellJs.com.

Programming

