Roadblock number one! Dynamically mounting weapons is proving harder than I expected.

4. February 2014

 

In my head this part of the process of creating a dynamically equipped character in 3D and rendering it to 2D was going to be a breeze.  Conceptually it went something like this...

 

  • Create a named bone in parent object where the weapon could attach.
  • Create a named bone in the child object where the weapon starts.
  • Find bone one, attach bone two.
  • Go enjoy a beer in smug satisfaction.

 

Sadly it doesn’t quite work like that.

 

There are a couple of problems here.

1- LibGDX doesn’t have a scene graph.  A scene graph is basically a data structure that holds relationships between objects in the world; essentially this is what scene2d actually is.  Well, LibGDX sorta has a scene graph in the form of Model, which is a graph of the nodes that compose the model.  There isn’t, however, a layer above this for creating relationships between models.  This sadly means I can’t (out of the box) do something like myTank.attachObjectAt(turret,(3,3,3));
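To make the idea of that missing layer concrete, here is a hedged sketch in plain Java rather than LibGDX classes.  Attachment, addMount and worldPositionOf are hypothetical names (nothing like this exists in LibGDX), and it composes translations only; a real version would compose full 4x4 transforms including rotation and scale.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical "attachment layer": derives a child model's world position
// from the parent's position plus a named mount offset.  Translation-only
// for clarity; real code would multiply full transform matrices.
public class Attachment {
    final double[] parentPos;                             // parent model's world position
    final Map<String, double[]> mounts = new HashMap<>(); // named mount points, parent-local

    Attachment(double[] parentPos) { this.parentPos = parentPos; }

    void addMount(String name, double x, double y, double z) {
        mounts.put(name, new double[]{x, y, z});
    }

    // World position where a child attached at the named mount should sit --
    // roughly the myTank.attachObjectAt(turret,(3,3,3)) call I wish existed.
    double[] worldPositionOf(String mount) {
        double[] m = mounts.get(mount);
        return new double[]{parentPos[0] + m[0], parentPos[1] + m[1], parentPos[2] + m[2]};
    }

    public static void main(String[] args) {
        Attachment tank = new Attachment(new double[]{10, 0, 5});
        tank.addMount("turretMount", 3, 3, 3);
        double[] p = tank.worldPositionOf("turretMount");
        System.out.println(p[0] + "," + p[1] + "," + p[2]); // 13.0,3.0,8.0
    }
}
```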

 

2- Modifying bones in LibGDX doesn’t actually seem to do anything.  My first thought was… well this is easy enough then, I’ll just set the GlobalTransform of the child bone to the location of the parent bone.  This however doesn’t do anything.  I might be missing something here, in fact I probably am.  But I couldn’t get changes to a bone to propagate to the attached model.  This one is likely user error, but after playing around with it for a few days, I am still no further ahead.
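One possible explanation (this is an assumption on my part, not a confirmed fix): global transforms are cached, and only get rebuilt when the hierarchy is walked again, which LibGDX exposes as ModelInstance.calculateTransforms().  A stripped-down plain-Java illustration of that caching behaviour, where Bone is a stand-in class of my own, not the LibGDX Node:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrates why editing a bone's local transform can appear to "do nothing":
// the cached global value is stale until the hierarchy is recomputed.
// One-dimensional offsets for brevity.
public class Bone {
    double localX;            // local offset from parent
    double globalX;           // cached world position
    final List<Bone> children = new ArrayList<>();

    Bone(double localX) { this.localX = localX; }

    // Recompute cached globals for this bone and everything below it,
    // analogous to ModelInstance.calculateTransforms() in LibGDX.
    void calculateTransforms(double parentGlobalX) {
        globalX = parentGlobalX + localX;
        for (Bone c : children) c.calculateTransforms(globalX);
    }

    public static void main(String[] args) {
        Bone root = new Bone(0);
        Bone hand = new Bone(2);
        root.children.add(hand);
        root.calculateTransforms(0);
        System.out.println(hand.globalX);  // 2.0

        hand.localX = 5;                   // edit the bone...
        System.out.println(hand.globalX);  // still 2.0 -- the edit didn't propagate

        root.calculateTransforms(0);       // ...until the hierarchy is recomputed
        System.out.println(hand.globalX);  // 5.0
    }
}
```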

 

3- Bones don’t have direction.  Well, that’s not accurate, bones don’t point in a direction.  Here for example is a sample hierarchy of bones exported in text readable format:

{"id": "Armature",
 "rotation": [-0.497507, 0.502480, -0.502480, 0.497507],
 "scale": [1.000000, 1.000000, 1.000000],
 "translation": [0.066099, 0.006002, -0.045762],
 "children": [
   {"id": "Bone_Torso",
    "rotation": [-0.498947, 0.501051, -0.501042, -0.498955],
    "scale": [1.000000, 1.000000, 1.000000],
    "children": [
      {"id": "Bone_UpArm",
       "rotation": [0.007421, -0.000071, -0.009516, 0.999927],
       "scale": [1.000000, 1.000000, 1.000000],
       "translation": [1.728194, 0.000000, -0.000000],
       "children": [
         {"id": "Bone_LowArm",
          "rotation": [0.999846, 0.012394, -0.000030, 0.012397],
          "scale": [1.000000, 1.000000, 1.000000],
          "translation": [1.663039, 0.000000, 0.000000],
          "children": [
            {"id": "Bone_Hand",
             "rotation": [0.999737, -0.016222, -0.000425, 0.016225],
             "scale": [1.000000, 1.000000, 1.000000],
             "translation": [1.676268, 0.000000, -0.000000]}
          ]}
       ]}
    ]}
 ]}

As you can see, you have the Armature, which is basically the root of the skeleton.  Its translation is the start location of the bones in local space ( of the model ).  Each bone within is therefore offset from the bone above it in the hierarchy.  What is lacking ( for me ) is a direction.  As an aside, I can actually infer the direction using the offset between (in this example) Bone_LowArm and Bone_Hand.  This of course requires two bones on both the parent and child models.
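That inference is just vector math: subtract the two bone positions and normalize.  A minimal plain-Java sketch (no LibGDX classes), using the Bone_LowArm to Bone_Hand offset from the export above:

```java
// Infer a bone "direction" from two bone positions by normalizing
// the vector between them.
public class BoneDirection {
    static double[] direction(double[] from, double[] to) {
        double dx = to[0] - from[0], dy = to[1] - from[1], dz = to[2] - from[2];
        double len = Math.sqrt(dx * dx + dy * dy + dz * dz);
        return new double[]{dx / len, dy / len, dz / len};
    }

    public static void main(String[] args) {
        // Bone_Hand sits 1.676268 along local X from Bone_LowArm (see the export)
        double[] lowArm = {0, 0, 0};
        double[] hand   = {1.676268, 0, 0};
        double[] dir = direction(lowArm, hand);
        System.out.printf("%.1f %.1f %.1f%n", dir[0], dir[1], dir[2]); // 1.0 0.0 0.0
    }
}
```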

 

Consider this bad drawing.

 

You have a tank and a turret like so, with the red dots representing bone positions:

 

Tankexample1

 

So long as your tank and turret are modelled with consistent coordinate systems ( i.e. both the same scale, same facing, etc. ), you can simply locate the tank bone and offset the turret relative to it, like so:

Tankexample2

 

In this simple example, it works perfectly!

 

Let’s consider a slightly more advanced situation however.  Let’s say we are diehard Warhammer fans and want to add sponsons to the side of our tank.

Once again, by modelling the tank and sponson consistently, we can get:

TankExample3

 

Yay, our future tank now looks a hell of a lot like the world’s first tank…  but this is pretty much what we want.  Now let’s mount one on the other side of the tank.

 

TankExample4

 

Ah shi… that’s not right.  Our second gun is INSIDE the tank.

 

The problem here is that we can’t depend on the coordinates of the two models matching up anymore.  We need to figure out direction when mounting the new weapon to the tank.  This means we need a facing direction for the bone, and that is something we don’t have.
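To make that concrete, here is a minimal plain-Java sketch of what a mount with a facing direction might look like (Mount and mirrored are hypothetical names of mine, not anything in LibGDX).  With only a position, a sponson modelled for one side ends up pointing into the hull on the other; with a direction, the second mount can flip its facing along with its position:

```java
// Hypothetical mount point carrying both a position and a facing direction,
// both expressed in the tank's local coordinates.
public class Mount {
    final double[] pos;  // mount position
    final double[] dir;  // outward facing direction

    Mount(double[] pos, double[] dir) { this.pos = pos; this.dir = dir; }

    // Mirror across the tank's centerline plane (x = 0): flip the X of both
    // the position AND the facing, so the gun points away from the hull.
    Mount mirrored() {
        return new Mount(new double[]{-pos[0], pos[1], pos[2]},
                         new double[]{-dir[0], dir[1], dir[2]});
    }

    public static void main(String[] args) {
        Mount right = new Mount(new double[]{1.5, 0, 0}, new double[]{1, 0, 0});
        Mount left  = right.mirrored();
        System.out.println(left.pos[0] + " faces " + left.dir[0]); // -1.5 faces -1.0
    }
}
```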

 

Again, this problem has proven trickier than expected.  I will share some of my failed approaches shortly, as this post has gotten pretty long.

Design




Is it worthwhile to promote tweets to promote your game?

31. January 2014

 

When I started Flexamail ages ago, I created a Twitter account and did all the social media promotional stuff you were supposed to do.  Early on, we caught some pretty positive press ( LifeHacker, MakeUseOf ) and it led to a huge amount of exposure.  Then the Twitter-verse started promoting it.  At one point, a Twitter user with over 4 million followers tweeted a link to Flexamail and it was retweeted an amazing 4,000 times!  I was expecting the traffic to come rolling in!

 

To put it mildly, I was disappointed with the results.  A total of perhaps 45 million people received a tweet with a link to my site… how many link clicks?  About 3,000.  Those 3,000 clicks resulted in about 200 account sign-ups.  For some perspective, the LifeHacker link resulted in tens of thousands of clicks and I would estimate 4–5K sign-ups.  At that point I decided that Twitter was a bunch of people talking and very few people listening.

 

Therefore, a couple of years ago when I started GameFromScratch, I pretty much ignored Twitter completely until about a year ago, and what an idiot I was.  It’s all a matter of targeting I suppose.  Now if someone posts a link to GameFromScratch on Twitter, from my very unscientific observation, I would say about 20% of a person’s followers follow the link on average.  So for example, if someone with a thousand followers posts a link, it results in about 200 visits, which is pretty darned good.  A stranger phenomenon: people with fewer followers tend to have a much higher click-through rate.  If someone has around 100 followers, you will often see almost half of them click a given link.

 

According to Google Analytics, a random sampling of social traffic breaks down like:

image

 

Obviously Reddit tops the list.  Reddit is a wonderful place to post for a content site like GameFromScratch, but I really don't recommend it for plugging a product; there is a sincere distaste for those kinds of posts.  That is, unless of course you share genuinely useful information, then you are loved.  Postmortems and shared sales data are always welcome and can be a huge traffic boost.  Or of course you could consider a promoted Reddit post, something I intend to explore at some point.

 

Next in traffic is Twitter, followed by Facebook and the oh-so-random StumbleUpon ( about a year ago I got 18,000 visits from SU in a single day! ).  So Twitter is certainly worth considering.

 

Which got me to wondering.  Looking at the sidebar on Twitter earlier today, there was a link for, well, basically paying Twitter for followers ( not those shady buy-a-thousand-followers-for-5$ services ).  I got curious: could this be an effective way to promote a game ( or in my case, a book )?  So I took a look.

 

The form itself is clever.  You say what regions and genders you want to target, and what type of followers you would like ( pick someone famous, or use your own followers as the example ).  Then you can select a Tweet from your history to be promoted.  You also decide whether you would like to pay per new follower, or pay per action ( retweet, favoriting, etc. ).  It all sounded pretty good… great in fact.

 

Then it came down to pricing.  You can set a maximum budget and a daily budget.  I went with 20$ and 20$, so basically I was saying I am willing to pay a total of 20$.  The fact it let me go in with such a low amount is certainly good for those of us on a smaller budget.  Next up came the bid… this is where you say how much you are willing to bid for your account to be promoted.  This works just like banner ads: basically you say “I am willing to pay X amount to show my Twitter profile” when someone matching your target demographic views Twitter.  If you are the high bid, you are shown; if you aren’t, you aren’t ( and it costs you nothing ).  Then it all falls apart!

image

 

Suggested bid… $2.50 to $3.50!!!!  Three bucks a follower?  Seriously???  That would mean it would cost me $2,500 to get to where I am now!

 

Ouch.  Maybe for large companies with huge budgets, this is worthwhile.  In fact, it is probably cheap compared to other forms of advertising.  For example, if Coke was running a Twitter campaign, 3$ a follower is probably dirt cheap compared to, say… a Super Bowl spot.  But for a small developer hoping to promote a game, good god no!

 

I am mildly curious to see what happens if you do a 1 cent bid, Twitter’s suggestions be damned!  At 3 bucks a follower though, is it worthwhile?  No, not really.

Totally Off Topic




Vertex 2 game art e-book released. It is completely free if you can download it.

28. January 2014

 

Back in August of 2012 we reported on a free PDF made available by Ryan Hawkins called Vertex.  It was a highly detailed guide to game art from various industry artists… oh, and it was completely free!

Now, Vertex2 has been released!

Photo: VERTEX 2 IS OFFICIALLY OUT!!!!! Share this link with your Facebook friends and please like us if you have not done so yet. We hope that you enjoy the second volume in the VERTEX series.

On our website below please visit the downloads section and download either book one or book two. Both are great reads and are unique to one another content wise. http://artbypapercut.com/

 

Basically, it’s more of the same!  Erm, I think.  The reality is, I haven’t been able to download it; their website is down.  Apparently hosting a large downloadable file on a sub-standard host isn’t a great idea.

 

You can keep trying that link above, or hopefully I will locate a mirror and share it here.  If you have a mirror, let me know and I will post it!  Once you do in fact get a download of the book, if you like it, be sure to like them on their Facebook page or consider using the donation link at the end of the book.  Awesome high quality free content is certainly worth rewarding!

Art




Rendering a 3D model to texture in LibGDX

21. January 2014

 

Back in this post I discussed ways of making dynamically equipped 2D sprites.  One way was to render a 3D object out to 2D textures dynamically.  So far we have looked at working in 3D in LibGDX, then exporting and rendering an animated model from Blender; the next step is rendering a dynamic 3D object to texture.  At first glance I thought this would be difficult, but in reality it was stupidly simple.  In fact, the very first bit of code I wrote simply worked!  Well, except for the results being upside down I suppose… details, details…

 

Anyways, that is what this post discusses: taking a 3D scene in LibGDX and rendering it to a 2D texture.  The code is just a modification of the code from the Blender to GDX post from a couple of days back.

 

package com.gamefromscratch;

import com.badlogic.gdx.ApplicationListener;
import com.badlogic.gdx.Files.FileType;
import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.InputProcessor;
import com.badlogic.gdx.graphics.GL10;
import com.badlogic.gdx.graphics.PerspectiveCamera;
import com.badlogic.gdx.graphics.Pixmap.Format;
import com.badlogic.gdx.graphics.Texture;
import com.badlogic.gdx.graphics.g2d.SpriteBatch;
import com.badlogic.gdx.graphics.g2d.TextureRegion;
import com.badlogic.gdx.graphics.g3d.Environment;
import com.badlogic.gdx.graphics.g3d.Model;
import com.badlogic.gdx.graphics.g3d.ModelBatch;
import com.badlogic.gdx.graphics.g3d.ModelInstance;
import com.badlogic.gdx.graphics.g3d.attributes.ColorAttribute;
import com.badlogic.gdx.graphics.g3d.loader.G3dModelLoader;
import com.badlogic.gdx.utils.UBJsonReader;
import com.badlogic.gdx.graphics.g3d.utils.AnimationController;
import com.badlogic.gdx.graphics.g3d.utils.AnimationController.AnimationDesc;
import com.badlogic.gdx.graphics.g3d.utils.AnimationController.AnimationListener;
import com.badlogic.gdx.graphics.glutils.FrameBuffer;



public class ModelTest implements ApplicationListener, InputProcessor {
    private PerspectiveCamera camera;
    private ModelBatch modelBatch;
    private Model model;
    private ModelInstance modelInstance;
    private Environment environment;
    private AnimationController controller;
    private boolean screenShot = false;
    private FrameBuffer frameBuffer;
    private Texture texture = null;
    private TextureRegion textureRegion;
    private SpriteBatch spriteBatch;
    
    @Override
    public void create() {        
        // Create a camera sized to the screen's width/height with a field of view of 75 degrees
        camera = new PerspectiveCamera(
                75,
                Gdx.graphics.getWidth(),
                Gdx.graphics.getHeight());
        
        // Move the camera 7 units back along the z-axis and look at the origin
        camera.position.set(0f,0f,7f);
        camera.lookAt(0f,0f,0f);
        
        // Near and Far (plane) represent the minimum and maximum ranges of the camera in, um, units
        camera.near = 0.1f; 
        camera.far = 300.0f;

        // A ModelBatch is like a SpriteBatch, just for models.  Use it to batch up geometry for OpenGL
        modelBatch = new ModelBatch();
        
        // Model loader needs a binary json reader to decode
        UBJsonReader jsonReader = new UBJsonReader();
        // Create a model loader passing in our json reader
        G3dModelLoader modelLoader = new G3dModelLoader(jsonReader);
        // Now load the model by name
        // Note, the model (g3db file ) and textures need to be added to the assets folder of the Android proj
        model = modelLoader.loadModel(Gdx.files.getFileHandle("data/benddemo.g3db", FileType.Internal));
        // Now create an instance.  Instance holds the positioning data, etc of an instance of your model
        modelInstance = new ModelInstance(model);

        //move the model down a bit on the screen ( in a z-up world, down is -z ).
        modelInstance.transform.translate(0, 0, -2);

        // Finally we want some light, or we wont see our color.  The environment gets passed in during
        // the rendering process.  Create one, then create an Ambient ( non-positioned, non-directional ) light.
        environment = new Environment();
        environment.set(new ColorAttribute(ColorAttribute.AmbientLight, 0.8f, 0.8f, 0.8f, 1.0f));
        
        // You use an AnimationController to um, control animations.  Each control is tied to the model instance
        controller = new AnimationController(modelInstance);  
        // Pick the current animation by name
        controller.setAnimation("Bend",1, new AnimationListener(){

            @Override
            public void onEnd(AnimationDesc animation) {
                // this will be called when the current animation is done. 
                // queue up another animation called "Balloon". 
                // Passing a negative to loop count loops forever.  1f for speed is normal speed.
                controller.queue("Balloon",-1,1f,null,0f);
            }

            @Override
            public void onLoop(AnimationDesc animation) {
                // TODO Auto-generated method stub
                
            }
            
        });
        
        frameBuffer = new FrameBuffer(Format.RGB888,Gdx.graphics.getWidth(),Gdx.graphics.getHeight(),false);
        Gdx.input.setInputProcessor(this);
        
        spriteBatch = new SpriteBatch();
    }

    @Override
    public void dispose() {
        modelBatch.dispose();
        model.dispose();
    }

    @Override
    public void render() {
        // You've seen all this before, just be sure to clear the GL_DEPTH_BUFFER_BIT when working in 3D
        Gdx.gl.glViewport(0, 0, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
        Gdx.gl.glClearColor(1, 1, 1, 1);
        Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);

        // When you change the camera details, you need to call update();
        // Also note, you need to call update() at least once.
        camera.update();
        
        // You need to call update on the animation controller so it will advance the animation.  Pass in frame delta
        controller.update(Gdx.graphics.getDeltaTime());
        
        
        // If the user requested a screenshot, we need to call begin on our framebuffer
        // This redirects output to the framebuffer instead of the screen.
        if(screenShot)
            frameBuffer.begin();

        // Like spriteBatch, just with models!  pass in the model instance and the environment
        modelBatch.begin(camera);
        modelBatch.render(modelInstance, environment);
        modelBatch.end();
        
        // Now tell OpenGL that we are done sending graphics to the framebuffer
        if(screenShot)
        {
            frameBuffer.end();
            // get the graphics rendered to the framebuffer as a texture
            texture = frameBuffer.getColorBufferTexture();
            // welcome to the wonderful world of different coordinate systems!
            // simply put, the framebuffer is upside down to normal textures, so we have to flip it
            // Use a TextureRegion to do so
            textureRegion = new TextureRegion(texture);
            // and.... FLIP!  V (vertical) only
            textureRegion.flip(false, true);
        }
        
        // In the case that we have a texture object to actually draw, we do so
        // using the old familiar SpriteBatch to do so.
        if(texture != null)
        {
            spriteBatch.begin();
            spriteBatch.draw(textureRegion,0,0);
            spriteBatch.end();
            screenShot = false;
        }
    }

    @Override
    public void resize(int width, int height) {
    }

    @Override
    public void pause() {
    }

    @Override
    public void resume() {
    }

    @Override
    public boolean keyDown(int keycode) {
        // TODO Auto-generated method stub
        return false;
    }

    @Override
    public boolean keyUp(int keycode) {
        // TODO Auto-generated method stub
        return false;
    }

    @Override
    public boolean keyTyped(char character) {
        // If the user hits a key, take a screen shot.
        this.screenShot = true;
        return false;
    }

    @Override
    public boolean touchDown(int screenX, int screenY, int pointer, int button) {
        // TODO Auto-generated method stub
        return false;
    }

    @Override
    public boolean touchUp(int screenX, int screenY, int pointer, int button) {
        // TODO Auto-generated method stub
        return false;
    }

    @Override
    public boolean touchDragged(int screenX, int screenY, int pointer) {
        // TODO Auto-generated method stub
        return false;
    }

    @Override
    public boolean mouseMoved(int screenX, int screenY) {
        // TODO Auto-generated method stub
        return false;
    }

    @Override
    public boolean scrolled(int amount) {
        // TODO Auto-generated method stub
        return false;
    }
}

And… that’s it.

Run the code and you get the exact same results as the last example:

blenderAnimation

 

However, if you press any key, the current frame is saved to a texture, and that is instead displayed on screen.  Press a key again and it will update to the current frame:

image

Of course, the background colour is different, because we didn’t explicitly set one.  The above is a LibGDX Texture object, which can now be treated as a 2D sprite, used as a texture map, whatever.

 

The code is ultra simple.  We have a toggle variable, screenShot, that gets set when the user hits a key.  The actual process of rendering to texture is done with the magic of FrameBuffers.  Think of a framebuffer as a place for OpenGL to render other than the screen.  So instead of drawing the graphics to the screen, it draws them to a memory buffer.  We then get this memory buffer as a texture using getColorBufferTexture().  The only complication is that the frame buffer is rendered upside down.  This is easily fixed by wrapping the Texture in a TextureRegion and flipping the V coordinate.  Finally we display our newly generated texture using our old friend, the SpriteBatch.

 

 

Gotta love it when something you expect to be difficult ends up being ultra easy.  Next I have to measure the performance, to see if this is something that can be done in a realtime situation, or if we need to do it on load/change.

Programming




3D models and animation from Blender to LibGDX

19. January 2014

 

Let me start off by saying that exporting from Blender is always a pain in the ass.  This experience didn’t prove to be an exception.  I will describe the process in some detail.

 

First and foremost, Blender 2.69 DIDN’T work.  At least not for me; no matter what I did, Blender 2.69 would not export texture information in FBX format.  This cost me many hours of my life.  Once I switched to 2.68 everything worked.  Your mileage may vary, but in my experience, 2.69 simply would not work.  Another thing, something I lost another couple of hours to… you need to use GL2 in LibGDX!

 

A lot of this process takes place on the Blender end.  I obviously won’t go into depth on how to use Blender.  If you are completely new to Blender, I highly suggest you run through this tutorial; it will teach you pretty much everything you need to know to follow along.

 

STEP 1: Model your … model

 

This step is pretty straightforward: you need to model an object to be exported.  I created this:

image

I took a standard cube, went into edit mode, extruded a face, selected all and sub-divided it.

 

STEP 2: Texture your model

 

Not completely sure why, but every model that is exported to FBX seems to require a texture.  Once again, YOU NEED A TEXTURE.

Next, FBX does NOT include the texture information.  This means you have to save your texture to an image file, and add that image to the assets folder of your project.

You need to UV Unwrap your object.  In Edit mode you can accomplish this by selecting all, then UV Unwrap->Smart UV Project.  In the UV/Image window, it should look like:

image

There are a couple of limitations here.  You need to make sure you have Face Textures enabled:

image

With your object selected, in materials, make sure Face Textures is selected.

 

Next be sure to select a texture:

image

Type is Image, then under the Image section, select your texture image.  This is the file that needs to be added to your assets folder.

Scroll down to Mapping, change Coordinates to UV, select your UVMap from the drop down, leave Project as flat:

image

 

At this point you can check if your texture is right by switching over to GLSL in the properties panel ( hit N key ) of the 3D View.

 

STEP 3: Set up your rig

 

This part is a bit tricky if you’ve never done it before.  You can create a series of named animations that will be exported.  You animate by setting key frames.  First you need to set up your skeleton.  Simply select Add->Armature->Single Bone.

image

 

Position the bone within your model.  Switch into edit mode, select the end “knob” of the bone and hit Extrude ( E ) to create more bones.  Repeat until it looks like this:  ( FYI, you can use Z to make a model see-through, handy when placing bones. )

image

Next you want to set the bones to manipulate the underlying mesh.

In Object mode, select your mesh, then shift-click to select the underlying bones.  Now parent them by hitting CTRL + P and selecting “With Automatic Weights”.

image

 

STEP 4: Animate your model

 

Now you can set up your animations.  Since we want multiple animations in the same file we are doing this slightly differently.

First set up your duration.  In the timeline, set end to the last frame of your longest animation:

image

 

 

Bring up a dopesheet view, then select the Action Editor:

image

 

Click the + icon, or new button ( depending if you have any existing animations, your options will change )

image

Name the animation.

 

Now go to your first frame ( Frame 1 using the controls in timeline ).

Select your bone/armature and switch to POSE mode.  Press A to select all bones.

Create a keyframe by hitting I ( as in “eye” ) then in the resulting menu select LocRotScale ( as in, Location, Rotation and Scale ), which will create a keyframe tracking those three things.

image

 

Now advance to the next frame you want to create a keyframe upon, rotate, scale or move your bone, then select all bones again and create another key.

You can create multiple named animations using the same process.  Simply click the + icon, name another animation, go back to frame 1, pose your model, set a keyframe, then continue animating and setting key frames.

 

STEP 5: Exporting to FBX

 

This part is pretty hit or miss.  You have a couple options, you can select just your mesh and armature, or simply export everything.

Then select File->Export->Autodesk FBX:

image

 

The documentation says to stay with the default axis settings and the FBX converter will take care of the rest.  Frankly, I could never get this to work.

These are the settings I am currently using:

image

 

STEP 6: Convert to g3db format and add to Eclipse

Open a command prompt wherever you exported the FBX.  Make sure your texture file is there as well.  If you haven’t already, download fbx-conv and extract it to the directory you exported your FBX to.  Next type:

fbx-conv-win32 -f yourfilename.fbx

image

 

This should generate a g3db file.  Copy it and your textures to the assets/data directory of your Android project:

image

 

STEP 7: The code

 

This code demonstrates how to load a 3D model and play multiple animations.  There is no description beyond the code comments.  If you have any questions the comments don’t cover, ask them below!

 

package com.gamefromscratch;

import com.badlogic.gdx.ApplicationListener;
import com.badlogic.gdx.Files.FileType;
import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.graphics.GL10;
import com.badlogic.gdx.graphics.PerspectiveCamera;
import com.badlogic.gdx.graphics.g3d.Environment;
import com.badlogic.gdx.graphics.g3d.Model;
import com.badlogic.gdx.graphics.g3d.ModelBatch;
import com.badlogic.gdx.graphics.g3d.ModelInstance;
import com.badlogic.gdx.graphics.g3d.attributes.ColorAttribute;
import com.badlogic.gdx.graphics.g3d.loader.G3dModelLoader;
import com.badlogic.gdx.utils.UBJsonReader;
import com.badlogic.gdx.graphics.g3d.utils.AnimationController;
import com.badlogic.gdx.graphics.g3d.utils.AnimationController.AnimationDesc;
import com.badlogic.gdx.graphics.g3d.utils.AnimationController.AnimationListener;



public class ModelTest implements ApplicationListener {
    private PerspectiveCamera camera;
    private ModelBatch modelBatch;
    private Model model;
    private ModelInstance modelInstance;
    private Environment environment;
    private AnimationController controller;
    
    @Override
    public void create() {        
        // Create a camera sized to the screen's width/height with a field of view of 75 degrees
        camera = new PerspectiveCamera(
                75,
                Gdx.graphics.getWidth(),
                Gdx.graphics.getHeight());
        
        // Move the camera 7 units back along the z-axis and look at the origin
        camera.position.set(0f,0f,7f);
        camera.lookAt(0f,0f,0f);
        
        // Near and Far (plane) represent the minimum and maximum ranges of the camera in, um, units
        camera.near = 0.1f; 
        camera.far = 300.0f;

        // A ModelBatch is like a SpriteBatch, just for models.  Use it to batch up geometry for OpenGL
        modelBatch = new ModelBatch();
        
        // Model loader needs a binary json reader to decode
        UBJsonReader jsonReader = new UBJsonReader();
        // Create a model loader passing in our json reader
        G3dModelLoader modelLoader = new G3dModelLoader(jsonReader);
        // Now load the model by name
        // Note, the model (g3db file ) and textures need to be added to the assets folder of the Android proj
        model = modelLoader.loadModel(Gdx.files.getFileHandle("data/blob.g3db", FileType.Internal));
        // Now create an instance.  Instance holds the positioning data, etc of an instance of your model
        modelInstance = new ModelInstance(model);
        
        //fbx-conv is supposed to perform this rotation for you... it doesn't seem to
        modelInstance.transform.rotate(1, 0, 0, -90);
        //move the model down a bit on the screen ( in a z-up world, down is -z ).
        modelInstance.transform.translate(0, 0, -2);

        // Finally we want some light, or we wont see our color.  The environment gets passed in during
        // the rendering process.  Create one, then create an Ambient ( non-positioned, non-directional ) light.
        environment = new Environment();
        environment.set(new ColorAttribute(ColorAttribute.AmbientLight, 0.8f, 0.8f, 0.8f, 1.0f));
        
        // You use an AnimationController to um, control animations.  Each control is tied to the model instance
        controller = new AnimationController(modelInstance);  
        // Pick the current animation by name
        controller.setAnimation("Bend",1, new AnimationListener(){

            @Override
            public void onEnd(AnimationDesc animation) {
                // this will be called when the current animation is done. 
                // queue up another animation called "balloon". 
                // Passing a negative to loop count loops forever.  1f for speed is normal speed.
                controller.queue("balloon",-1,1f,null,0f);
            }

            @Override
            public void onLoop(AnimationDesc animation) {
                // TODO Auto-generated method stub
                
            }
            
        });
    }

    @Override
    public void dispose() {
        modelBatch.dispose();
        model.dispose();
    }

    @Override
    public void render() {
        // You've seen all this before, just be sure to clear the GL_DEPTH_BUFFER_BIT when working in 3D
        Gdx.gl.glViewport(0, 0, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
        Gdx.gl.glClearColor(1, 1, 1, 1);
        Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
        
        // For some flavor, lets spin our camera around the Y axis by 1 degree each time render is called
        //camera.rotateAround(Vector3.Zero, new Vector3(0,1,0),1f);
        // When you change the camera details, you need to call update();
        // Also note, you need to call update() at least once.
        camera.update();
        
        // You need to call update on the animation controller so it will advance the animation.  Pass in frame delta
        controller.update(Gdx.graphics.getDeltaTime());
        // Like spriteBatch, just with models!  pass in the model instance and the environment
        modelBatch.begin(camera);
        modelBatch.render(modelInstance, environment);
        modelBatch.end();
    }

    @Override
    public void resize(int width, int height) {
    }

    @Override
    public void pause() {
    }

    @Override
    public void resume() {
    }
}

 

Now if you run it, you should see:

blenderAnimation

 

And that is how you get a textured, animated 3D model from Blender to LibGDX.

 

VIDEO EDITION!

 

Ok, that might have been a bit confusing, so I decided to screen capture the entire process.  The following video shows creating, texturing, animating then exporting a model from Blender to LibGDX.  There is no instruction beyond what is above, but it might help you seeing the entire process, especially if something I said above didn’t make sense, or if your own export doesn’t work out.

 

The video on YouTube is much higher resolution than the embedded one below.

 

Let me know if you have any questions.

Programming, Art