14. February 2014

This is one of those things I’ve been fighting with for the past few days so I thought I would share a bit.

First, a bit of a primer for those of you who aren’t all that familiar with bones yet.  Bones are a common way of animating 3D geometry.  Essentially you add an armature (skeleton) to your scene and bind it to the geometry of your mesh.  Moving the bones that compose your armature will then update the bound geometry.
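The “bind and influence” idea can be sketched numerically.  Here is a toy example (plain Java, not Blender or LibGDX API — all names here are mine) of linear blend skinning: each skinned vertex is the weight-blended sum of the vertex as moved by each influencing bone.

```java
public class SkinningSketch {
    // Simplified linear blend skinning along one axis: each "bone transform"
    // is just a translation, and the skinned position is the weight-blended
    // sum of each bone's moved copy of the vertex.
    // skinned = sum_i( w_i * (v + boneOffset_i) ), with weights summing to 1.
    static double skin(double vertex, double[] boneOffsets, double[] weights) {
        double result = 0.0;
        for (int i = 0; i < boneOffsets.length; i++) {
            result += weights[i] * (vertex + boneOffsets[i]);
        }
        return result;
    }

    public static void main(String[] args) {
        // A vertex at 2.0, influenced equally by a bone that moved +1
        // and a bone that stayed put: it moves half way.
        System.out.println(skin(2.0, new double[]{1.0, 0.0}, new double[]{0.5, 0.5})); // 2.5
    }
}
```

This is why the weight colours in Blender matter: a vertex weighted 100% to one bone follows it rigidly, while mixed weights give the smooth deformation you see around joints.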

Let’s take a look at an example in Blender.  Here is the anatomy of skeletal animation in Blender:

Each bone in turn has a “weight”.  This is the amount of influence the bone’s movements have on the geometry.  First you need to parent the mesh to the armature.  Simply select the mesh, then shift select the bones and press Ctrl+P.

Each bone then has a certain weight attached to it.  The colour determines how much influence a bone has over the geometry.  Consider the following; it’s the weight mapping for the topmost bone in that example:

So now, if in pose mode, I rotate that bone, we see:

As you can see, moving the bone changes the surrounding geometry based on the bone’s influence area.  In a nutshell, that is how bone animation works in Blender.  I cover more of the “how to” in the Blender to LibGDX tutorial if you want details.

So that’s bones in Blender; let’s take a look at the LibGDX side of the equation.  Here is how the skeleton is represented in a g3dj file:

{"id": "Armature",
 "rotation": [-0.707107,  0.000000,  0.000000,  0.707107],
 "scale": [ 1.000000,  1.000000,  1.000000],
 "translation": [ 0.012381, -0.935900, -0.017023],
 "children": [
  {"id": "Bone",
   "rotation": [ 0.500000, -0.500000,  0.500000,  0.500000],
   "scale": [ 1.000000,  1.000000,  1.000000],
   "children": [
    {"id": "Bone_001",
     "rotation": [ 0.000000,  0.009739, -0.000000,  0.999953],
     "scale": [ 1.000000,  1.000000,  1.000000],
     "translation": [ 1.000000,  0.000000,  0.000000],
     "children": [
      {"id": "Bone_002",
       "rotation": [-0.000000, -0.013871,  0.000000,  0.999904],
       "scale": [ 1.000000,  1.000000,  1.000000],
       "translation": [ 1.575528,  0.000000,  0.000000]}
     ]}
   ]}
 ]}
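Those “rotation” values are quaternions in (x, y, z, w) order.  A quick sanity check you can run on them (plain Java, nothing LibGDX-specific): the Armature’s [-0.707107, 0, 0, 0.707107] decodes to a -90 degree rotation about X, which is the usual Blender Z-up to Y-up conversion.

```java
public class QuatAngle {
    // Recover the rotation angle (in degrees) from an (x, y, z, w) quaternion
    // like the ones in the g3dj snippet above: angle = 2 * acos(w).
    static double angleDeg(double x, double y, double z, double w) {
        return Math.toDegrees(2.0 * Math.acos(w));
    }

    public static void main(String[] args) {
        // The Armature's rotation [-0.707107, 0, 0, 0.707107]:
        // only the x component is non-zero, so this is a rotation about X,
        // and the negative sign makes it -90 degrees.
        System.out.println(angleDeg(-0.707107, 0, 0, 0.707107)); // ~90.0
    }
}
```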

You can also see the bones as part of the geometry node as well, like so:

{"id": "Cube",
"translation": [-0.012381, -0.017023, 0.935900],
"parts": [
{"meshpartid": "shape1_part1",
"materialid": "Material",
"bones": [
{"node": "Bone",
"translation": [ 0.012381, 0.017023, -0.935900, 0.000000],
"rotation": [ 0.500000, -0.500000, 0.500000, 0.500000],
"scale": [ 1.000000, 1.000000, 1.000000, 0.000000]},

{"node": "Bone_002",
"translation": [ 0.012381, 0.047709, 1.639329, 0.000000],
"rotation": [ 0.502062, -0.502062, 0.497930, 0.497930],
"scale": [ 1.000000, 1.000000, 1.000000, 0.000000]},

{"node": "Bone_001",
"translation": [ 0.012381, 0.017023, 0.064100, 0.000000],
"rotation": [ 0.495107, -0.495107, 0.504846, 0.504846],
"scale": [ 1.000000, 1.000000, 1.000000, 0.000000]}
],
"uvMapping": [[]]}
]},

The latter are the bones that are contained in the mesh “Cube”.  This will be relevant in a minute.  For now, let’s look at the Armature composition.

Each bone within the hierarchy is basically just a series of transforms relative to its parent.  The armature itself has a rotation, scale and translation, as does each child.  In your ModelInstance, the Armature is a hierarchy of Nodes, like so:
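Ignoring rotation and scale for a moment (the bones above all have identity scale, and their offsets run along a single axis), the world position of a bone is just the parent chain’s translations composed together.  A toy sketch (plain Java), using the values from the g3dj above:

```java
public class BoneHierarchy {
    // Each bone stores only a translation relative to its parent (rotation
    // and scale omitted for brevity). The world-space position of a bone is
    // every translation from the root down to it, summed in order.
    static double[] worldPosition(double[][] chainTranslations) {
        double[] pos = new double[]{0, 0, 0};
        for (double[] t : chainTranslations) {
            pos[0] += t[0];
            pos[1] += t[1];
            pos[2] += t[2];
        }
        return pos;
    }

    public static void main(String[] args) {
        // Bone -> Bone_001 -> Bone_002 from the g3dj: each child sits further
        // along its parent's X axis.
        double[] tip = worldPosition(new double[][]{
            {0, 0, 0},          // Bone (no translation in the export)
            {1.0, 0, 0},        // Bone_001
            {1.575528, 0, 0}    // Bone_002
        });
        System.out.println(tip[0]); // 2.575528
    }
}
```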

Animations then are simply a series of transforms applied to bones over a period of time, like so:

These values correspond with the keyframe values you set in Blender.
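Between keyframes, the animation system interpolates.  A minimal sketch of sampling between two keyed values (plain Java; a real system slerps quaternions for rotation, but the idea is the same):

```java
public class KeyframeLerp {
    // Linear interpolation between two keyframes: the simplest version of
    // what an animation system does between the keyframes you set in Blender.
    static double sample(double t0, double v0, double t1, double v1, double t) {
        double alpha = (t - t0) / (t1 - t0); // 0 at the first key, 1 at the second
        return v0 + alpha * (v1 - v0);
    }

    public static void main(String[] args) {
        // A bone rotation keyed at 0 degrees on frame 1 and 90 degrees on
        // frame 25, sampled at frame 13 (exactly halfway):
        System.out.println(sample(1, 0, 25, 90, 13)); // 45.0
    }
}
```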

Now there are a couple of gotchas to be aware of!

First off, in LibGDX a bone is probably more accurately called a joint.  Remember what a bone looked like in Blender:

Only the “bone head” is used.  The tail effectively doesn’t exist.

So, positioning relative to a bone will bring you to the base, not the tail.  Therefore, if you want to, say, use bones for positioning other limbs, you need to create an extra one, and this leads to a problem.

Say I want to create a bone then that I can search for in code to mount a weapon upon.  I would then have to do something like this:

This allows me to locate the very tip of my geometry.  But there is a catch.  If I export it, I can see the new bone Bone_003 is part of my armature:

That said, remember the entry for “Cube” showed the bones it contains… yeah well, that’s a problem.

See… the new bone isn’t actually contained within the geometry.

As a direct result, when working with it in code in LibGDX, it just doesn’t work.  It never returns the proper position, or at least the position I would expect.  I’ve also had some weird behaviour where an exported model with only a single bone can’t be programmatically updated.  I need to investigate this further.

As a result, I’ve decided that bones simply aren’t the way to go about it.  Instead, what I’ve started doing is putting a null object in where I want weapon mounts to appear.  It doesn’t seem to have the gotchas that bones have, so far.

Sorry for the slow rate of updates, I am sick as a dog right now.  So if that post seemed a little incoherent, that’s why! :)

5. February 2014

I literally spent hours on this and it didn’t work.  So I decided to strip it down to absolute basics, create a barebones solution and figure out exactly what is going wrong.

The kicker is, the answer is nothing, it works exactly as expected.  Want to manipulate a bone in a Model in LibGDX and see the results propagated?  Well, this is how.

First I modelled the following in Blender:

It’s a simple mesh with a single animation attached.  If you read my prior tutorials, the how of it will be no problem.

Then I ran it with this code:

```package com.gamefromscratch;

import com.badlogic.gdx.ApplicationListener;
import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.Input;
import com.badlogic.gdx.InputProcessor;
import com.badlogic.gdx.graphics.GL10;
import com.badlogic.gdx.graphics.PerspectiveCamera;
import com.badlogic.gdx.graphics.g3d.Environment;
import com.badlogic.gdx.graphics.g3d.Model;
import com.badlogic.gdx.graphics.g3d.ModelBatch;
import com.badlogic.gdx.graphics.g3d.ModelInstance;
import com.badlogic.gdx.graphics.g3d.attributes.ColorAttribute;
import com.badlogic.gdx.graphics.g3d.loader.G3dModelLoader;
import com.badlogic.gdx.graphics.g3d.model.Node;
import com.badlogic.gdx.graphics.g3d.utils.AnimationController;
import com.badlogic.gdx.utils.UBJsonReader;

public class Boned implements ApplicationListener, InputProcessor {
    private PerspectiveCamera camera;
    private ModelBatch modelBatch;
    private Model blobModel;
    private ModelInstance blobModelInstance;
    private Node rootBone;
    private Environment environment;
    private AnimationController animationController;

    @Override
    public void create() {
        camera = new PerspectiveCamera(
                75,
                Gdx.graphics.getWidth(),
                Gdx.graphics.getHeight());
        camera.position.set(0f, 3f, 5f);
        camera.lookAt(0f, 3f, 0f);
        camera.near = 0.1f;
        camera.far = 300.0f;

        modelBatch = new ModelBatch();

        // The load call was missing from the original listing; a typical
        // g3db load looks like this (filename assumed):
        blobModel = new G3dModelLoader(new UBJsonReader())
                .loadModel(Gdx.files.internal("data/blob.g3db"));
        blobModelInstance = new ModelInstance(blobModel);

        animationController = new AnimationController(blobModelInstance);
        animationController.animate("Bend", -1, 1f, null, 0f);

        rootBone = blobModelInstance.getNode("Bone");

        environment = new Environment();
        environment.set(new ColorAttribute(ColorAttribute.AmbientLight, 0.8f, 0.8f, 0.8f, 1.0f));

        Gdx.input.setInputProcessor(this);
    }

    @Override
    public void dispose() {
        modelBatch.dispose();
        blobModel.dispose();
    }

    @Override
    public void render() {
        // You've seen all this before, just be sure to clear the GL_DEPTH_BUFFER_BIT when working in 3D
        Gdx.gl.glViewport(0, 0, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
        Gdx.gl.glClearColor(1, 1, 1, 1);
        Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);

        camera.update();
        animationController.update(Gdx.graphics.getDeltaTime());

        modelBatch.begin(camera);
        modelBatch.render(blobModelInstance, environment);
        modelBatch.end();
    }

    @Override
    public void resize(int width, int height) {
    }

    @Override
    public void pause() {
    }

    @Override
    public void resume() {
    }

    @Override
    public boolean keyDown(int keycode) {
        // The handler bodies were lost to formatting in the original post;
        // something like this translates the root bone as described below.
        if (keycode == Input.Keys.LEFT) {
            rootBone.translation.x -= 0.5f;
            return true;
        }
        else if (keycode == Input.Keys.RIGHT) {
            rootBone.translation.x += 0.5f;
            return true;
        }
        return false;
    }

    @Override
    public boolean keyUp(int keycode) {
        return false;
    }

    @Override
    public boolean keyTyped(char character) {
        return false;
    }

    @Override
    public boolean touchDown(int screenX, int screenY, int pointer, int button) {
        return false;
    }

    @Override
    public boolean touchUp(int screenX, int screenY, int pointer, int button) {
        return false;
    }

    @Override
    public boolean touchDragged(int screenX, int screenY, int pointer) {
        return false;
    }

    @Override
    public boolean mouseMoved(int screenX, int screenY) {
        return false;
    }

    @Override
    public boolean scrolled(int amount) {
        return false;
    }
}```

End result, you get this:

Press the arrow keys and the root bone is translated exactly as you would expect!

Now, I spent HOURS trying to do this, and for the life of me I couldn’t figure out why the heck it doesn’t work.  Sometimes going back to the basics gives you a clue.

In my test I used two models: one an animated bending arm, somewhat like the above; the other an axe with a single bone for “attaching”.  The exact same code above failed to work.  Something’s up here...

So after I got the above working fine, I had an idea… is it the animation?  So I commented out this line:

animationController.animate("Bend",-1,1f,null,0f);

BOOM!  No longer works.

So it seems changes you make to the bones controlling a Model only propagate if there is an animation playing.  A hacky workaround seems to be to export an empty animation, but there has to be a better way.  At least now I know why I wasted several hours on something that should have just worked.  Next I am going to dig into the code for animate() and see if there is a call I can make manually without requiring an attached animation.

EDIT:

Got it!

Gotta admit it took a bit of digging, but I figured out what I am missing.  Each time you make a change to the bones you need to call calculateTransforms() on the ModelInstance that owns the bone!  Change the code like so:

```public boolean keyDown(int keycode) {
    if (keycode == Input.Keys.LEFT) {
        rootBone.translation.x -= 0.5f;      // move the bone...
        blobModelInstance.calculateTransforms(); // ...then rebuild the cached world transforms
        return true;
    }
    else if (keycode == Input.Keys.RIGHT) {
        rootBone.translation.x += 0.5f;
        blobModelInstance.calculateTransforms();
        return true;
    }
    return false;
}```

And presto, it works!

Just a warning: calculateTransforms() doesn’t appear to be lightweight, so use it with caution.

If you are curious where in the process calculateTransforms() is called when you call animate(), it’s the end() call in BaseAnimationController.java, called from the method applyAnimations().
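The behaviour makes more sense once you think of world transforms as a cache.  Here is a toy version of the idea (my own classes, NOT the LibGDX ones): editing a local transform changes nothing downstream until an explicit recalculation pass runs — which is exactly what calculateTransforms(), or a playing animation, provides.

```java
import java.util.ArrayList;
import java.util.List;

public class TransformCache {
    // A toy node hierarchy: world transforms are cached, so editing a local
    // transform has no visible effect until calculateTransforms() is called
    // again -- mirroring the behaviour observed in LibGDX above.
    double localX;   // local translation along one axis
    double worldX;   // cached world translation
    final List<TransformCache> children = new ArrayList<>();

    void calculateTransforms(double parentWorldX) {
        worldX = parentWorldX + localX;
        for (TransformCache c : children) {
            c.calculateTransforms(worldX);
        }
    }

    public static void main(String[] args) {
        TransformCache root = new TransformCache();
        TransformCache child = new TransformCache();
        root.children.add(child);
        child.localX = 2.0;
        root.calculateTransforms(0);

        root.localX = 5.0;                  // like editing a bone's translation...
        System.out.println(child.worldX);   // still 2.0 -- stale until recalculated
        root.calculateTransforms(0);
        System.out.println(child.worldX);   // now 7.0
    }
}
```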

4. February 2014

In my head this part of the process of creating a dynamically equipped character in 3D and rendering it to 2D was going to be a breeze.  Conceptually it went something like this...

• Create a named bone in parent object where the weapon could attach.
• Create a named bone in the child object where the weapon starts.
• Find bone one, attach bone two.
• Go enjoy a beer in smug satisfaction.

Sadly it doesn’t quite work like that.

There are a couple problems here.

1- LibGDX doesn’t have a scene graph.  A scene graph is basically a data structure that holds relationships in the world; essentially this is what Scene2D actually is.  Well, LibGDX sort of has a scene graph in the form of Model, which is a graph of nodes that compose the model.  There isn’t, however, a layer above this for creating relationships between models.  This sadly means I can’t (out of the box) do something like myTank.attachObjectAt(turret,(3,3,3));

2- Modifying bones in LibGDX doesn’t actually seem to do anything.  My first thought was… well, this is easy enough then, I’ll just set the GlobalTransform of the child bone to the location of the parent bone.  This however doesn’t do anything.  I might be missing something here; in fact, I probably am.  But I couldn’t get changes to a bone to propagate to the attached model.  This one is likely user error, but after playing around with it for a few days, I am still no further ahead.

3- Bones don’t have direction.  Well, that’s not accurate, bones don’t point in a direction.  Here for example is a sample hierarchy of bones exported in text readable format:

{"id": "Armature",
 "rotation": [-0.497507, 0.502480, -0.502480, 0.497507],
 "scale": [ 1.000000, 1.000000, 1.000000],
 "translation": [ 0.066099, 0.006002, -0.045762],
 "children": [
  {"id": "Bone_Torso",
   "rotation": [-0.498947, 0.501051, -0.501042, -0.498955],
   "scale": [ 1.000000, 1.000000, 1.000000],
   "children": [
    {"id": "Bone_UpArm",
     "rotation": [ 0.007421, -0.000071, -0.009516, 0.999927],
     "scale": [ 1.000000, 1.000000, 1.000000],
     "translation": [ 1.728194, 0.000000, -0.000000],
     "children": [
      {"id": "Bone_LowArm",
       "rotation": [ 0.999846, 0.012394, -0.000030, 0.012397],
       "scale": [ 1.000000, 1.000000, 1.000000],
       "translation": [ 1.663039, 0.000000, 0.000000],
       "children": [
        {"id": "Bone_Hand",
         "rotation": [ 0.999737, -0.016222, -0.000425, 0.016225],
         "scale": [ 1.000000, 1.000000, 1.000000],
         "translation": [ 1.676268, 0.000000, -0.000000]}
       ]}
     ]}
   ]}
 ]}

As you can see, you have the Armature, which is basically the root of the skeleton.  Its translation is the start location of the bones in local space ( of the model ).  Each bone within is then offset from the bone above it in the hierarchy.  What is lacking ( for me ) is a direction.  As an aside, I can actually infer the direction using the vector between (in this example) Bone_LowArm and Bone_Hand.  This of course requires two bones on both the parent and child models.
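Inferring that direction is simple enough: normalize the vector between the two bone heads.  A sketch (plain Java, positions along the lines of the arm hierarchy above):

```java
public class BoneDirection {
    // Infer a facing direction from two bone head positions (e.g.
    // Bone_LowArm and Bone_Hand): normalize the vector between them.
    static double[] direction(double[] from, double[] to) {
        double dx = to[0] - from[0];
        double dy = to[1] - from[1];
        double dz = to[2] - from[2];
        double len = Math.sqrt(dx * dx + dy * dy + dz * dz);
        return new double[]{dx / len, dy / len, dz / len};
    }

    public static void main(String[] args) {
        // Two bones offset along X, as in the arm hierarchy:
        double[] dir = direction(new double[]{1.728194, 0, 0},
                                 new double[]{1.728194 + 1.663039, 0, 0});
        System.out.println(dir[0]); // 1.0 -- pointing down +X
    }
}
```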

Say you have a tank and a turret like so, with the red dots representing bone positions:

So long as your tank and turret are modelled with consistent coordinate systems ( aka, both same scale, same facings, etc ), you can simply locate the tank bone and offset the turret relative, like so:

In this simple example, it works perfectly!

Let’s consider a slightly more advanced situation, however.  Let’s say we are diehard Warhammer fans and want to add sponsons to the side of our tank.

Once again, by modelling consistently with the tank and sponson, we can get:

Yay, our future tank now looks a hell of a lot like the world’s first tank…  but this is pretty much what we want.  Now let’s mount one to the other side of the tank.

Ah shi… that’s not right.  Our second gun is INSIDE the tank.

The problem here is, we can’t depend on the coordinates of both models matching up anymore.  We need to figure out direction when mounting the new weapon to the tank.  This means we need a facing direction for the bone, and this is something we don’t have.
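For what it’s worth, if we did have a facing direction, mirroring the mount would be a one-liner.  A hypothetical sketch (my own function, nothing from LibGDX) of mounting at a bone position with an explicit mirror across the tank’s local X=0 plane:

```java
public class MirroredMount {
    // Mount an attachment at a bone position, optionally mirrored across the
    // tank's local X=0 plane. With the mirror applied, the far-side sponson
    // ends up outside the hull instead of inside it.
    static double[] mount(double[] bonePos, double[] offset, boolean mirrorX) {
        double sx = mirrorX ? -1.0 : 1.0;
        return new double[]{(bonePos[0] + offset[0]) * sx,
                            bonePos[1] + offset[1],
                            bonePos[2] + offset[2]};
    }

    public static void main(String[] args) {
        double[] right = mount(new double[]{1.5, 0.5, 0}, new double[]{0.25, 0, 0}, false);
        double[] left  = mount(new double[]{1.5, 0.5, 0}, new double[]{0.25, 0, 0}, true);
        System.out.println(right[0] + " vs " + left[0]); // 1.75 vs -1.75
    }
}
```

The hard part isn’t the math — it’s that nothing in the exported bone data tells you which side the bone is facing, which is the whole complaint above.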

Again, this problem has proven trickier than expected.  I will share some of my failed approaches shortly, as this post got pretty long.

21. January 2014

Back in this post I discussed ways of making dynamically equipped 2D sprites.  One way was to render out a 3D object to 2D textures dynamically.  So far we have looked at working in 3D in LibGDX, then exporting and rendering an animated model from Blender; the next step would be a dynamic 3D object to texture.  At first glance I thought this would be difficult, but in reality it was stupidly simple.  In fact, the very first bit of code I wrote simply worked!  Well, except for the results being upside down, I suppose… details, details…

Anyways, that is what this post discusses.  Taking a 3D scene in LibGDX and rendering in a 2D texture.  The code is just a modification of the code from the Blender to GDX post from a couple days back.

```package com.gamefromscratch;

public class ModelTest implements ApplicationListener, InputProcessor {
private PerspectiveCamera camera;
private ModelBatch modelBatch;
private Model model;
private ModelInstance modelInstance;
private Environment environment;
private AnimationController controller;
private boolean screenShot = false;
private FrameBuffer frameBuffer;
private Texture texture = null;
private TextureRegion textureRegion;
private SpriteBatch spriteBatch;

@Override
public void create() {
// Create camera sized to screens width/height with Field of View of 75 degrees
camera = new PerspectiveCamera(
75,
Gdx.graphics.getWidth(),
Gdx.graphics.getHeight());

// Move the camera 5 units back along the z-axis and look at the origin
camera.position.set(0f,0f,7f);
camera.lookAt(0f,0f,0f);

// Near and Far (plane) represent the minimum and maximum ranges of the camera in, um, units
camera.near = 0.1f;
camera.far = 300.0f;

// A ModelBatch is like a SpriteBatch, just for models.  Use it to batch up geometry for OpenGL
modelBatch = new ModelBatch();

// Now load the model by name
// Note, the model (g3db file ) and textures need to be added to the assets folder of the Android proj
// (The actual load call was missing from the original listing; filename assumed.)
ModelLoader loader = new G3dModelLoader(new UBJsonReader());
model = loader.loadModel(Gdx.files.internal("data/model.g3db"));

// Now create an instance.  Instance holds the positioning data, etc of an instance of your model
modelInstance = new ModelInstance(model);

//move the model down a bit on the screen ( in a z-up world, down is -z ).
modelInstance.transform.translate(0, 0, -2);

// Finally we want some light, or we wont see our color.  The environment gets passed in during
// the rendering process.  Create one, then create an Ambient ( non-positioned, non-directional ) light.
environment = new Environment();
environment.set(new ColorAttribute(ColorAttribute.AmbientLight, 0.8f, 0.8f, 0.8f, 1.0f));

// You use an AnimationController to um, control animations.  Each control is tied to the model instance
controller = new AnimationController(modelInstance);
// Pick the current animation by name
controller.setAnimation("Bend",1, new AnimationListener(){

@Override
public void onEnd(AnimationDesc animation) {
// this will be called when the current animation is done.
// queue up another animation called "balloon".
// Passing a negative to loop count loops forever.  1f for speed is normal speed.
controller.queue("Balloon",-1,1f,null,0f);
}

@Override
public void onLoop(AnimationDesc animation) {
// TODO Auto-generated method stub

}

});

frameBuffer = new FrameBuffer(Format.RGB888,Gdx.graphics.getWidth(),Gdx.graphics.getHeight(),false);
Gdx.input.setInputProcessor(this);

spriteBatch = new SpriteBatch();
}

@Override
public void dispose() {
modelBatch.dispose();
model.dispose();
}

@Override
public void render() {
// You've seen all this before, just be sure to clear the GL_DEPTH_BUFFER_BIT when working in 3D
Gdx.gl.glViewport(0, 0, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
Gdx.gl.glClearColor(1, 1, 1, 1);
Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);

// When you change the camera details, you need to call update();
// Also note, you need to call update() at least once.
camera.update();

// You need to call update on the animation controller so it will advance the animation.  Pass in frame delta
controller.update(Gdx.graphics.getDeltaTime());

// If the user requested a screenshot, we need to call begin on our framebuffer
// This redirects output to the framebuffer instead of the screen.
if(screenShot)
frameBuffer.begin();

// Like spriteBatch, just with models!  pass in the box Instance and the environment
modelBatch.begin(camera);
modelBatch.render(modelInstance, environment);
modelBatch.end();

// Now tell OpenGL that we are done sending graphics to the framebuffer
if(screenShot)
{
frameBuffer.end();
// get the graphics rendered to the framebuffer as a texture
texture = frameBuffer.getColorBufferTexture();
// welcome to the wonderful world of different coordinate systems!
// simply put, the framebuffer is upside down to normal textures, so we have to flip it
// Use a TextureRegion to do so
textureRegion = new TextureRegion(texture);
// and.... FLIP!  V (vertical) only
textureRegion.flip(false, true);
}

// In the case that we have a texture object to actually draw, we do so
// using the old familiar SpriteBatch to do so.
if(texture != null)
{
spriteBatch.begin();
spriteBatch.draw(textureRegion,0,0);
spriteBatch.end();
screenShot = false;
}
}

@Override
public void resize(int width, int height) {
}

@Override
public void pause() {
}

@Override
public void resume() {
}

@Override
public boolean keyDown(int keycode) {
// TODO Auto-generated method stub
return false;
}

@Override
public boolean keyUp(int keycode) {
// TODO Auto-generated method stub
return false;
}

@Override
public boolean keyTyped(char character) {
// If the user hits a key, take a screen shot.
this.screenShot = true;
return false;
}

@Override
public boolean touchDown(int screenX, int screenY, int pointer, int button) {
// TODO Auto-generated method stub
return false;
}

@Override
public boolean touchUp(int screenX, int screenY, int pointer, int button) {
// TODO Auto-generated method stub
return false;
}

@Override
public boolean touchDragged(int screenX, int screenY, int pointer) {
// TODO Auto-generated method stub
return false;
}

@Override
public boolean mouseMoved(int screenX, int screenY) {
// TODO Auto-generated method stub
return false;
}

@Override
public boolean scrolled(int amount) {
// TODO Auto-generated method stub
return false;
}
}```

And… that’s it.

Run the code and you get the exact same results as the last example:

However, if you press any key, the current frame is saved to a texture, and that is instead displayed on screen.  Press a key again and it will update to the current frame:

Of course, the background colour is different, because we didn’t explicitly set one.  The above is a LibGDX Texture object, which can now be treated as a 2D sprite, used as a texture map, whatever.

The code is ultra simple.  We have a toggle variable, screenShot, that gets set if the user hits a key.  The actual process of rendering to texture is done with the magic of FrameBuffers.  Think of a framebuffer as a place for OpenGL to render other than the screen.  So instead of drawing the graphics to the screen, it draws them to a memory buffer.  We then get this memory buffer as a texture using getColorBufferTexture().  The only complication is that the frame buffer is rendered upside down.  This is easily fixed by wrapping the Texture in a TextureRegion and flipping the V coordinate.  Finally we display our newly generated texture using our old friend, the SpriteBatch.
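If you are curious what flip(false, true) actually does to the region: it swaps the V texture coordinates, so the region samples the (upside-down) framebuffer texture from the other end.  A sketch of just that coordinate swap (plain Java, not the LibGDX source):

```java
public class RegionFlip {
    // A vertical flip of a texture region's UV box {u, v, u2, v2}:
    // swap v and v2, so sampling runs top-to-bottom instead of bottom-to-top.
    static double[] flipV(double[] uv) {
        return new double[]{uv[0], uv[3], uv[2], uv[1]};
    }

    public static void main(String[] args) {
        // A region covering the whole framebuffer texture, flipped vertically:
        double[] flipped = flipV(new double[]{0.0, 0.0, 1.0, 1.0});
        System.out.println(flipped[1] + ", " + flipped[3]); // 1.0, 0.0
    }
}
```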

Gotta love it when something you expect to be difficult ends up being ultra easy.  Next I have to measure the performance, to see if this is something that can be done in a realtime situation, or if we need to do it on load/change.

19. January 2014

Let me start off by saying that exporting from Blender is always a pain in the ass.  This experience didn’t prove to be an exception.  I will describe the process in some detail.

First and foremost, Blender 2.69 DIDN’T work.  At least not for me; no matter what I did, Blender 2.69 would not export texture information in FBX format.  This cost me many hours of my life.  Once I switched to 2.68, everything worked.  Your mileage may vary, but in my experience, 2.69 simply would not work.  Another thing, something I lost another couple hours to… you need to use GL2 in LibGDX!

A lot of this process takes place on the Blender end.  I obviously don’t go into depth in how to use Blender.  If you are completely new to Blender, I highly suggest you run through this tutorial, it will teach you pretty much everything you need to know to follow along.

## STEP 1: Model your … model

This step is pretty straight forward, you need to model an object to be exported.  I created this:

I took a standard cube, went into Edit mode, extruded the face, selected all and subdivided it.

## STEP 2: Texture your model

Not completely sure why, but every model that is exported to FBX seems to require a texture.  Once again, YOU NEED A TEXTURE.

Next, FBX does NOT include the texture information.  This means you have to save your texture to an image file, and add that image to the assets folder of your project.

You need to UV Unwrap your object.  In Edit mode you can accomplish this by selecting all, then UV Unwrap->Smart UV Project.  In the UV/Image window, it should look like:

There are a couple limitations here.  You need to make sure you have Face Textures enabled:

With your object selected, in materials, make sure Face Textures is selected.

Next be sure to select a texture:

Type is Image, then under the Image section, select your texture image.  This is the file that needs to be added to your assets folder.

Scroll down to Mapping, change Coordinates to UV, select your UVMap from the drop down, leave Project as flat:

At this point you can check if your texture is right by switching over to GLSL in the properties panel ( hit N key ) of the 3D View.

## STEP 3: Set up your rig

This part is a bit tricky if you’ve never done it before.  You can create a series of named animations that will be exported.  You animate by setting key frames.  First you need to setup up your skeleton.  Simply select Add->Armature->Single Bone.

Position the bone within your model.  Switch into Edit mode, select the end “knob” of the bone and press Extrude ( E ) to create more bones.  Repeat until it looks like this:  ( FYI, you can use Z to make a model see-through, handy when placing bones. )

Next you want to set the bones to manipulate the underlying mesh.

In Object mode, select your mesh, then shift-click to select the underlying bones.  Now parent them by hitting Ctrl+P and selecting “With Automatic Weights”.

## STEP 4: Animate your model

Now you can set up your animations.  Since we want multiple animations in the same file we are doing this slightly differently.

First set up your duration.  In the timeline, set end to the last frame of your longest animation:

Bring up a dopesheet view, then select the Action Editor:

Click the + icon, or new button ( depending if you have any existing animations, your options will change )

Name the animation.

Now go to your first frame ( Frame 1 using the controls in timeline ).

Select your bone/armature and switch to POSE mode.  Press A to select all bones.

Create a keyframe by hitting I ( as in “eye” ) then in the resulting menu select LocRotScale ( as in, Location, Rotation and Scale ), which will create a keyframe tracking those three things.

Now advance to the next frame you want to create a keyframe upon, rotate, scale or move your bone, then select all bones again and create another key.

You can create multiple named animations using the same process.  Simply click the + icon, name another animation, go back to frame 1, pose it, set a keyframe, then animate, setting keyframes as you go.

## STEP 5: Exporting to FBX

This part is pretty hit or miss.  You have a couple options, you can select just your mesh and armature, or simply export everything.

Then select File->Export->Autodesk FBX:

The documents say to stay with the default axis settings and the FBX converter will take care of the rest.  Frankly I could never get this to work.

These are the settings I am currently using:

## STEP 6: Convert to g3db format and add to Eclipse

Make sure your texture file is in the same directory you exported the FBX to.  If you haven’t already, download fbx-conv and extract it to that directory.  Open a command prompt and CD to that directory.  Next type:

fbx-conv-win32 -f yourfilename.fbx

This should generate a g3db file.  Copy it and your textures to the assets/data directory of your android project:

## STEP 7: The code

This code demonstrates how to load a 3D model and play multiple animations.  There is no description beyond the code comments.  If you have any questions the comments don’t cover, ask them below!

```package com.gamefromscratch;

public class ModelTest implements ApplicationListener {
private PerspectiveCamera camera;
private ModelBatch modelBatch;
private Model model;
private ModelInstance modelInstance;
private Environment environment;
private AnimationController controller;

@Override
public void create() {
// Create camera sized to screens width/height with Field of View of 75 degrees
camera = new PerspectiveCamera(
75,
Gdx.graphics.getWidth(),
Gdx.graphics.getHeight());

// Move the camera 5 units back along the z-axis and look at the origin
camera.position.set(0f,0f,7f);
camera.lookAt(0f,0f,0f);

// Near and Far (plane) represent the minimum and maximum ranges of the camera in, um, units
camera.near = 0.1f;
camera.far = 300.0f;

// A ModelBatch is like a SpriteBatch, just for models.  Use it to batch up geometry for OpenGL
modelBatch = new ModelBatch();

// Now load the model by name
// Note, the model (g3db file ) and textures need to be added to the assets folder of the Android proj
// (The actual load call was missing from the original listing; filename assumed.)
ModelLoader loader = new G3dModelLoader(new UBJsonReader());
model = loader.loadModel(Gdx.files.internal("data/model.g3db"));

// Now create an instance.  Instance holds the positioning data, etc of an instance of your model
modelInstance = new ModelInstance(model);

//fbx-conv is supposed to perform this rotation for you... it doesnt seem to
modelInstance.transform.rotate(1, 0, 0, -90);
//move the model down a bit on the screen ( in a z-up world, down is -z ).
modelInstance.transform.translate(0, 0, -2);

// Finally we want some light, or we wont see our color.  The environment gets passed in during
// the rendering process.  Create one, then create an Ambient ( non-positioned, non-directional ) light.
environment = new Environment();
environment.set(new ColorAttribute(ColorAttribute.AmbientLight, 0.8f, 0.8f, 0.8f, 1.0f));

// You use an AnimationController to um, control animations.  Each control is tied to the model instance
controller = new AnimationController(modelInstance);
// Pick the current animation by name
controller.setAnimation("Bend",1, new AnimationListener(){

@Override
public void onEnd(AnimationDesc animation) {
// this will be called when the current animation is done.
// queue up another animation called "balloon".
// Passing a negative to loop count loops forever.  1f for speed is normal speed.
controller.queue("balloon",-1,1f,null,0f);
}

@Override
public void onLoop(AnimationDesc animation) {
// TODO Auto-generated method stub

}

});
}

@Override
public void dispose() {
modelBatch.dispose();
model.dispose();
}

@Override
public void render() {
// You've seen all this before, just be sure to clear the GL_DEPTH_BUFFER_BIT when working in 3D
Gdx.gl.glViewport(0, 0, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
Gdx.gl.glClearColor(1, 1, 1, 1);
Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);

// For some flavor, lets spin our camera around the Y axis by 1 degree each time render is called
//camera.rotateAround(Vector3.Zero, new Vector3(0,1,0),1f);
// When you change the camera details, you need to call update();
// Also note, you need to call update() at least once.
camera.update();

// You need to call update on the animation controller so it will advance the animation.  Pass in frame delta
controller.update(Gdx.graphics.getDeltaTime());
// Like spriteBatch, just with models!  pass in the box Instance and the environment
modelBatch.begin(camera);
modelBatch.render(modelInstance, environment);
modelBatch.end();
}

@Override
public void resize(int width, int height) {
}

@Override
public void pause() {
}

@Override
public void resume() {
}
}```

Now if you run it, you should see:

And that, is how you get a textured animated 3D model from Blender to LibGDX.

## VIDEO EDITION!

Ok, that might have been a bit confusing, so I decided to screen capture the entire process.  The following video shows creating, texturing, animating then exporting a model from Blender to LibGDX.  There is no instruction beyond what is above, but it might help you seeing the entire process, especially if something I said above didn’t make sense, or if your own export doesn’t work out.

The video on YouTube is much higher resolution than the embedded one below.

Let me know if you have any questions.