19. January 2014

 

Let me start off by saying that exporting from Blender is always a pain in the ass.  This experience was no exception.  I will describe the process in some detail.

 

First and foremost, Blender 2.69 DIDN'T work.  At least not for me; no matter what I did, Blender 2.69 would not export texture information in FBX format.  This cost me many hours of my life.  Once I switched to 2.68 everything worked.  Your mileage may vary, but in my experience, 2.69 simply would not work.  Another thing, something I lost another couple of hours to… you need to use GL2 in LibGDX!
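For reference, here is roughly what enabling GL2 looks like in a desktop launcher.  This is a sketch, not code from this project: the `DesktopStarter` class name is my own invention, and the `useGL20` field is from the 0.9.x-era LWJGL backend configuration, so check your LibGDX version.

```java
import com.badlogic.gdx.backends.lwjgl.LwjglApplication;
import com.badlogic.gdx.backends.lwjgl.LwjglApplicationConfiguration;

// Hypothetical desktop starter for the ModelTest class shown later in this post.
public class DesktopStarter {
    public static void main(String[] args) {
        LwjglApplicationConfiguration cfg = new LwjglApplicationConfiguration();
        cfg.title = "ModelTest";
        // The key line: without GL2, the skinned/textured g3db model will not render correctly.
        cfg.useGL20 = true;
        new LwjglApplication(new ModelTest(), cfg);
    }
}
```

The Android backend has an equivalent flag on its own configuration object.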

 

A lot of this process takes place on the Blender end.  I obviously don’t go into depth on how to use Blender.  If you are completely new to Blender, I highly suggest you run through this tutorial; it will teach you pretty much everything you need to know to follow along.

 

STEP 1: Model your … model

 

This step is pretty straightforward: you need to model an object to be exported.  I created this:

image

I took a standard cube, went into Edit Mode, extruded a face, then selected all and subdivided it.

 

STEP 2: Texture your model

 

Not completely sure why, but every model that is exported to FBX seems to require a texture.  Once again, YOU NEED A TEXTURE.

Next, FBX does NOT include the texture information.  This means you have to save your texture to an image file, and add that image to the assets folder of your project.

You need to UV Unwrap your object.  In Edit mode you can accomplish this by selecting all, then UV Unwrap->Smart UV Project.  In the UV/Image window, it should look like:

image

There are a couple of limitations here.  You need to make sure you have Face Textures enabled:

image

With your object selected, in materials, make sure Face Textures is selected.

 

Next be sure to select a texture:

image

Set Type to Image, then under the Image section, select your texture image.  This is the file that needs to be added to your assets folder.

Scroll down to Mapping, change Coordinates to UV, select your UVMap from the drop down, and leave Projection as Flat:

image

 

At this point you can check that your texture is right by switching over to GLSL shading in the properties panel ( hit the N key ) of the 3D View.

 

STEP 3: Set up your rig

 

This part is a bit tricky if you’ve never done it before.  You can create a series of named animations that will be exported.  You animate by setting key frames.  First you need to set up your skeleton.  Simply select Add->Armature->Single Bone.

image

 

Position the bone within your model.  Switch into edit mode, select the end “knob” of the bone and press Extrude ( E ) to create more bones.  Repeat until it looks like this:  ( FYI, you can press Z to make a model see-through, handy when placing bones ).

image

Next you want to set the bones to manipulate the underlying mesh.

In Object Mode, select your mesh, then shift-click to also select the underlying bones.  Now parent them by hitting CTRL + P and selecting “With Automatic Weights”.

image

 

STEP 4: Animate your model

 

Now you can set up your animations.  Since we want multiple animations in the same file we are doing this slightly differently.

First set up your duration.  In the timeline, set end to the last frame of your longest animation:

image

 

 

Bring up a dopesheet view, then select the Action Editor:

image

 

Click the + icon or New button ( your options will change depending on whether you have any existing animations ):

image

Name the animation.

 

Now go to your first frame ( Frame 1 using the controls in timeline ).

Select your bone/armature and switch to POSE mode.  Press A to select all bones.

Create a keyframe by hitting I ( as in “eye” ) then in the resulting menu select LocRotScale ( as in, Location, Rotation and Scale ), which will create a keyframe tracking those three things.

image

 

Now advance to the next frame you want to create a keyframe upon, rotate, scale or move your bone, then select all bones again and create another key.

You can create multiple named animations using the same process.  Simply click the + icon, name another animation, go back to frame 1, pose it, set a keyframe, then continue animating and setting keyframes.
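For intuition, this is all a keyframe really is: a value pinned to a frame, with the engine interpolating between neighbouring keys during playback.  Here is a minimal plain-Java sketch of linear keyframe sampling; it is an illustration of the idea, not LibGDX's actual implementation.

```java
public class KeyframeDemo {
    // frames[i] is the frame number of key i, values[i] is the keyed value
    // (think one component of a bone's rotation, in degrees).
    static float sample(int[] frames, float[] values, float frame) {
        if (frame <= frames[0]) return values[0];            // clamp before first key
        int last = frames.length - 1;
        if (frame >= frames[last]) return values[last];      // clamp after last key
        // Find the surrounding pair of keys and interpolate linearly between them
        for (int i = 0; i < last; i++) {
            if (frame < frames[i + 1]) {
                float t = (frame - frames[i]) / (float) (frames[i + 1] - frames[i]);
                return values[i] + t * (values[i + 1] - values[i]);
            }
        }
        return values[last];
    }

    public static void main(String[] args) {
        int[] frames = {1, 20};       // keys set at frame 1 and frame 20
        float[] values = {0f, 90f};   // e.g. a bone rotating from 0 to 90 degrees
        // Halfway between the keys, the bone is at 45 degrees:
        System.out.println(sample(frames, values, 10.5f)); // 45.0
    }
}
```

This is why you only need to key the frames where something changes; everything in between comes for free.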

 

STEP 5: Exporting to FBX

 

This part is pretty hit or miss.  You have a couple options, you can select just your mesh and armature, or simply export everything.

Then select File->Export->Autodesk FBX:

image

 

The documentation says to stay with the default axis settings and the FBX converter will take care of the rest.  Frankly, I could never get this to work.

These are the settings I am currently using:

image

 

STEP 6: Convert to 3dgb format and add to Eclipse

Open a command prompt wherever you exported the FBX.  Make sure your texture file is there as well.  If you haven’t already, download fbx-conv and extract it to the directory you exported your FBX to.  Then CD to that directory and type:

fbx-conv-win32 -f yourfilename.fbx

image

 

This should generate a g3db file.  Copy it and your textures to the assets/data directory of your android project:

image

 

STEP 7: The code

 

This code demonstrates how to load a 3D model and play multiple animations.  There is no description beyond the code comments.  If you have any questions the comments don’t cover, ask them below!

 

package com.gamefromscratch;

import com.badlogic.gdx.ApplicationListener;
import com.badlogic.gdx.Files.FileType;
import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.graphics.GL10;
import com.badlogic.gdx.graphics.PerspectiveCamera;
import com.badlogic.gdx.graphics.g3d.Environment;
import com.badlogic.gdx.graphics.g3d.Model;
import com.badlogic.gdx.graphics.g3d.ModelBatch;
import com.badlogic.gdx.graphics.g3d.ModelInstance;
import com.badlogic.gdx.graphics.g3d.attributes.ColorAttribute;
import com.badlogic.gdx.graphics.g3d.loader.G3dModelLoader;
import com.badlogic.gdx.utils.UBJsonReader;
import com.badlogic.gdx.graphics.g3d.utils.AnimationController;
import com.badlogic.gdx.graphics.g3d.utils.AnimationController.AnimationDesc;
import com.badlogic.gdx.graphics.g3d.utils.AnimationController.AnimationListener;



public class ModelTest implements ApplicationListener {
    private PerspectiveCamera camera;
    private ModelBatch modelBatch;
    private Model model;
    private ModelInstance modelInstance;
    private Environment environment;
    private AnimationController controller;
    
    @Override
    public void create() {        
        // Create camera sized to screens width/height with Field of View of 75 degrees
        camera = new PerspectiveCamera(
                75,
                Gdx.graphics.getWidth(),
                Gdx.graphics.getHeight());
        
        // Move the camera 7 units back along the z-axis and look at the origin
        camera.position.set(0f,0f,7f);
        camera.lookAt(0f,0f,0f);
        
        // Near and Far (plane) represent the minimum and maximum ranges of the camera in, um, units
        camera.near = 0.1f; 
        camera.far = 300.0f;

        // A ModelBatch is like a SpriteBatch, just for models.  Use it to batch up geometry for OpenGL
        modelBatch = new ModelBatch();
        
        // Model loader needs a binary json reader to decode
        UBJsonReader jsonReader = new UBJsonReader();
        // Create a model loader passing in our json reader
        G3dModelLoader modelLoader = new G3dModelLoader(jsonReader);
        // Now load the model by name
        // Note, the model (g3db file ) and textures need to be added to the assets folder of the Android proj
        model = modelLoader.loadModel(Gdx.files.getFileHandle("data/blob.g3db", FileType.Internal));
        // Now create an instance.  Instance holds the positioning data, etc of an instance of your model
        modelInstance = new ModelInstance(model);
        
        //fbx-conv is supposed to perform this rotation for you... it doesn't seem to
        modelInstance.transform.rotate(1, 0, 0, -90);
        //move the model down a bit on the screen ( in a z-up world, down is -z ).
        modelInstance.transform.translate(0, 0, -2);

        // Finally we want some light, or we wont see our color.  The environment gets passed in during
        // the rendering process.  Create one, then create an Ambient ( non-positioned, non-directional ) light.
        environment = new Environment();
        environment.set(new ColorAttribute(ColorAttribute.AmbientLight, 0.8f, 0.8f, 0.8f, 1.0f));
        
        // You use an AnimationController to um, control animations.  Each control is tied to the model instance
        controller = new AnimationController(modelInstance);  
        // Pick the current animation by name
        controller.setAnimation("Bend",1, new AnimationListener(){

            @Override
            public void onEnd(AnimationDesc animation) {
                // this will be called when the current animation is done. 
                // queue up another animation called "balloon". 
                // Passing a negative to loop count loops forever.  1f for speed is normal speed.
                controller.queue("balloon",-1,1f,null,0f);
            }

            @Override
            public void onLoop(AnimationDesc animation) {
                // TODO Auto-generated method stub
                
            }
            
        });
    }

    @Override
    public void dispose() {
        modelBatch.dispose();
        model.dispose();
    }

    @Override
    public void render() {
        // You've seen all this before, just be sure to clear the GL_DEPTH_BUFFER_BIT when working in 3D
        Gdx.gl.glViewport(0, 0, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
        Gdx.gl.glClearColor(1, 1, 1, 1);
        Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
        
        // For some flavor, lets spin our camera around the Y axis by 1 degree each time render is called
        //camera.rotateAround(Vector3.Zero, new Vector3(0,1,0),1f);
        // When you change the camera details, you need to call update();
        // Also note, you need to call update() at least once.
        camera.update();
        
        // You need to call update on the animation controller so it will advance the animation.  Pass in frame delta
        controller.update(Gdx.graphics.getDeltaTime());
        // Like spriteBatch, just with models!  pass in the box Instance and the environment
        modelBatch.begin(camera);
        modelBatch.render(modelInstance, environment);
        modelBatch.end();
    }

    @Override
    public void resize(int width, int height) {
    }

    @Override
    public void pause() {
    }

    @Override
    public void resume() {
    }
}

 

Now if you run it, you should see:

blenderAnimation

 

And that is how you get a textured, animated 3D model from Blender to LibGDX.

 

VIDEO EDITION!

 

Ok, that might have been a bit confusing, so I decided to screen capture the entire process.  The following video shows creating, texturing, animating and then exporting a model from Blender to LibGDX.  There is no instruction beyond what is above, but it might help to see the entire process, especially if something I said above didn’t make sense, or if your own export doesn’t work out.

 

The video on YouTube is much higher resolution than the embedded one below.

 

Let me know if you have any questions.



26. December 2013

 

Substance Designer is a visual texture creation tool which is currently 66% off on Steam, selling for $33.99.  It’s a flash sale with only 2 1/2 hours to go, so you might want to act quickly.

Substance 4 integrates with most 3D modelling applications, including Max, Maya and Modo, as well as engines such as Unity and UDK.  Of course, you can also generate a texture in SD4 and use it in any application.

I think it may be a late Christmas gift for myself.



1. November 2013

 

Blender announced the release of Blender 2.69, and now we are going to take a quick look at what is in it for game developers.

 

The biggest feature on that front is the ability to import FBX files, as well as export FBX and OBJ files with split normals.  As FBX support improves, it becomes easier and easier to slot Blender into a seamless multi application workflow.

 

The mesh bisect tool was added for quickly cutting an object in half:

image

 

A clean-up tool was added for automatically detecting and fixing holes in a mesh.

Symmetrize was re-written and now preserves UV and mesh data.

Probably the biggest new feature is the addition of the Hidden Wire display mode.  With this enabled, only front-facing wireframe is shown:

image

There were a number of other small modeling changes.

 

Plane Tracking was added to the Motion Tracker, for replacing flat surfaces in a scene, such as a billboard.

image

 

As well, there were a number of improvements to the Cycles renderer.

 

All told, not a ton new in this update.



16. September 2013

 

Throughout the tutorial series I intentionally stayed with a low polygon count model.  There are two reasons for this.  First, if you are using it for a real-time game instead of creating a sprite, a lower polygon count is important for performance, especially in a mobile title.  Second, it’s actually fairly simple to add detail to a model, but it’s a heck of a lot harder to take it away!

 

In this post, we are going to quickly look at ways to add detail to a model in Blender.

 

Subdivision Surface Modifier

 

One option is to apply a Subdivision Surface Modifier to your mesh.  This is a well named modifier, as it does exactly what you would think… the mesh subdivides itself, over and over until you have a smoother mesh.  Each time you subdivide it becomes smoother and the mesh becomes denser.

 

To apply a subdivision surface modifier, add it like you did the Mirror modifier:

image

 

Then in the options:

image

 

The two subdivision controls determine how many times the mesh will be subdivided.  View is how many times the viewport mesh ( the one you work with ) is subdivided, while Render is the number of iterations performed at render time.  The choice between Catmull-Clark and Simple determines which subdivision algorithm is used.
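The polygon growth per level is easy to estimate: with Catmull-Clark on an all-quad mesh, each quad becomes four quads at every level, so face count grows by 4x per iteration.  A quick sketch of that arithmetic ( real counts vary once triangles, creases and the rest of the modifier stack are involved ):

```java
public class SubdivCount {
    // Estimated face count for an all-quad mesh after n Catmull-Clark levels:
    // every quad splits into 4 quads at each level.
    static long quadFaces(long faces, int levels) {
        for (int i = 0; i < levels; i++) {
            faces *= 4;
        }
        return faces;
    }

    public static void main(String[] args) {
        // A default cube has 6 quad faces:
        System.out.println(quadFaces(6, 1)); // 24
        System.out.println(quadFaces(6, 3)); // 384
    }
}
```

That exponential growth is exactly why the View value should stay low while you work.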

 

Here for example, are the effects of View at various values:

 

View = 0

image

View = 1

image

View = 3

image

 

The mesh is smoothed a great deal with the simple click of a button.  However, as you can see from the wireframe, you still edit the original low-resolution mesh.  So you work in low resolution and see the results as they will be subdivided.  On the other hand, the polygon count takes a kicking.

 

Here are the low polygon version’s stats:

image

And here is the result after we apply a 3x subdivision modifier:

image

 

… so that is why we work with a low polygon version!  From 514 triangles to 66,548!

 

One thing to keep in mind with the subdivision modifier: it works on the entire mesh, while sometimes you may just want to subdivide parts of a mesh… don’t worry, Blender’s got you covered there too!

 

There is also the Multiresolution modifier, which performs almost identically but allows you to sculpt with the generated vertices.  I have never personally used it; any time I’ve sculpted, I have simply applied the Subdivision modifier before sculpting.  I suppose the Multiresolution modifier exists specifically to solve that problem.

 

Subdivide/Subdivide Smooth

 

When in Edit Mesh mode, you have two other options, subdivide and subdivide smooth.

 

Simply hit W to bring up the context menu and select either Subdivide or Subdivide smooth:

image

 

This operation will be applied to whatever you have selected.  So, for example, if we wanted to add detail to the cockpit area, select it and choose either:

Subdivide

image

Subdivide Smooth

image

 

Unlike using a modifier, the effect is immediate and the base mesh itself is altered.  You can also pick and choose where you want the subdivision to occur, and you can run it multiple times.

 

Subdivide Smooth x 3:

image

 

What about sharp edges?

 

This is the first obvious question when it comes to smoothing via subdivision.  Partially, this is why Subdivide and Subdivide Smooth both exist.  However, there are also ways to maintain a sharp edge after you apply subdivision.

 

Consider the jet intake.  We want it to stay at a nearly 90 degree angle, but here it is with a Subdivision Modifier with View = 2:

image

 

That’s obviously not what we want.

 

Instead, we can select both edge loops around the intake, then hit CTRL + E to bring up the Edge edit menu, then select Mark Sharp, followed by Split Edge:

image

 

And now when you apply a subD modifier:

image

 

Your flat edges stay flat.

 

You can also achieve nearly the identical effect using edge loops before applying a subdivision.  Consider the same air intake: if you want to keep the front flat, you can add two new edge loops right beside the edge you want to stay sharp.  Like so:

image

 

And now when you subdivide, you get:

image

 

 

As you can see, it’s fairly simple to add detail to an existing mesh.



15. September 2013

 

We come to the end of our time with Blender!  It’s finally time to turn our 3D model into pixels.  We are going to render our animation out to a series of images with transparent backgrounds.

 

You have one final decision to make: Perspective or Orthographic projection.  So, what exactly is the difference?  In a nutshell, something viewed with perspective changes in size as it changes in depth; the further away something gets, the smaller it appears.  This is essentially how we see the world, well, unless of course you have poor depth perception.  In Orthographic projection, things are drawn at the size they are, regardless of depth.

 

Here for example is the same frame rendered each way:

Perspective:

PerspectiveProjection

Orthographic:

OrthographicProjection

 

Generally, perspective is the de facto format you would use, but when working with sprites, orthographic makes a ton of sense.  If you decide to go with Orthographic, there are a few things you need to know.
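The difference between the two can be boiled down to a couple of one-line formulas.  This is a simplified sketch of the underlying math ( method and parameter names are mine, and a full projection matrix involves more than this ): with perspective, apparent size scales with near/depth; with orthographic, depth drops out entirely and only the ortho scale matters.

```java
public class ProjectionDemo {
    // Perspective: apparent size shrinks as the object moves further from the camera.
    static float perspectiveSize(float size, float near, float depth) {
        return size * near / depth;
    }

    // Orthographic: apparent size is independent of depth; only the scale matters
    // (this is the role of Blender's Orthographic Scale setting).
    static float orthographicSize(float size, float orthoScale) {
        return size / orthoScale;
    }

    public static void main(String[] args) {
        // The same 2-unit object at depth 5 and depth 10:
        System.out.println(perspectiveSize(2f, 1f, 5f));  // 0.4 (closer = bigger)
        System.out.println(perspectiveSize(2f, 1f, 10f)); // 0.2 (farther = smaller)
        System.out.println(orthographicSize(2f, 4f));     // 0.5 at any depth
    }
}
```

For a rotating sprite this is exactly why orthographic is attractive: the jet stays the same on-screen size at every frame of the spin.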

 

As you may recall, you swap between different lens modes in the camera tab:

image

 

Once you switch to Orthographic, you can no longer zoom or pan the camera.  You instead tweak it using Orthographic Scale in place of zoom, and Shift X and Y in place of panning.

image

 

Now before you render, you want to make sure your lighting is set and your camera is properly framed.  We obviously want to contain our entire sprite in every frame, so that means we need to make sure our sprite is entirely within the camera shot at its biggest point during rotation.  In this case, that would be frame 5 or 15 in our animation.  Play to one of those two frames, then make sure the jet is entirely within the camera’s shot:

image

 

Now bring up the Render panel in the Property window.  Of key importance are the resolution and animation frame start and end.

image

 

I want my final sprite to be 192x128 in size and I want to render all 20 frames in the animation.  Be careful of the percentage value, as it will initially be 50% ( which would result in your sprite being 96x64 instead ).  If you change the ratio between the X and Y resolution, be aware the camera target rectangle will change as well.
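The percentage slider is just a multiplier on the base resolution, which is worth double-checking before a long batch render.  A trivial sketch of the arithmetic:

```java
public class RenderResolution {
    // Blender's final render size = base resolution * percentage slider
    static int scaled(int base, int percent) {
        return base * percent / 100;
    }

    public static void main(String[] args) {
        // 192x128 at the default 50% slider:
        System.out.println(scaled(192, 50) + "x" + scaled(128, 50));   // 96x64
        // 192x128 at 100%, which is what we actually want:
        System.out.println(scaled(192, 100) + "x" + scaled(128, 100)); // 192x128
    }
}
```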

 

Since we don’t want our background to render, there are a few more settings to change.  Scroll down, locate Shading, and set Alpha to Transparent.

image

 

 

Then scroll down to Output, select a path you want to save frames to ( yes, the results can end up looking a bit strange, like mine has here, don’t worry, it will still work ) and you want to make sure whatever graphic format you go with ( like PNG here ) supports an Alpha channel, so select RGBA.

image

 

And now we are finally ready to render!  Since we want to render all the frames in our animation, click Animation.

image

 

Then navigate to the output directory and…

image

 

Wahoo!  Our animated sprites are finally done.  Now if you were to use it in a game, at the current resolution it would look like:

jetAnimated

 

Congratulations, you just modelled, textured and rendered your first game sprite in Blender!  I hope you’ve enjoyed the series!

 

I will be covering a few more topics in appendix form, but that concludes the Blender portion required to make the game sprite.

 

I was going to cover creating a sprite sheet, then I realized I already had… oops.  This tutorial shows how you can create a sprite sheet from a series of images using GIMP.  This tutorial as well as this one show how to use a sprite sheet in Haxe/NME.  This one shows how to use TexturePacker to assemble your many sprites into a single sheet, as well as how to use a sprite sheet in PlayStation Mobile.  Finally this tutorial shows how to use a sprite sheet in Cocos2D HTML5.

 

So I guess I’ve already got the whole sprite sheet process covered!


