18. March 2014


With GDC going on, it’s no surprise to hear a number of product announcements.  Today Autodesk announced the annual refresh of almost all of their game-related technologies, including Maya and Maya LT, Max, MotionBuilder, Mudbox and Softimage. 


From the official press release here are the major new features for each product:

Autodesk Maya 2015 software adds new capabilities to the toolset such as the new Bifrost
procedural effects platform which provides an extensible, artist-friendly workflow for complex
simulation and rendering tasks, initially applied to near photorealistic liquids; XGen Arbitrary
Primitive Generator for the easy creation of richly detailed geometry such as hair, fur, and foliage; 
Geodesic Voxel Binding method for skinning characters; ShaderFX, a new node-based visual
interface for shader programming; support for Pixar’s OpenSubdiv libraries; enhanced polygon
modeling tools; and expanded UV options;

Autodesk 3ds Max 2015 software has been extended and redesigned to help improve
performance, ease-of-use and management of complex scenes. New in 2015 is ShaderFX, a new
node-based visual interface that allows game artists and programmers to more easily create
advanced HLSL viewport shaders; point cloud dataset support for reality capture workflows; new
viewport performance optimizations; a redesigned scene explorer to make it easier for artists to
manage large scenes; ActiveShade support for the NVIDIA mental ray renderer; and new Python
scripting support – a highly requested user feature for pipeline integration; 

Autodesk MotionBuilder 2015 provides several features that advance motion capture workflow
accessibility such as: a new plug-in for Microsoft Kinect to help capture body movements for use
in MotionBuilder, Animatable Depth of Field and Follow Focus camera options to recreate
elements of real-world cinematography, a robust content library with 100 commonly required
character animations in the Autodesk FBX® format and flexible marker assignment to adjust
character positions;

Autodesk Mudbox 2015 software boasts streamlined mesh refinement for retopologizing and new
Sculpt Layer and Paint Layer groups for organizing and identifying particular layers in complex
scenes. The release also has advanced interoperability with Maya 2015, an enhanced texture
export and updating workflow, new caliper tool and support for Intel HD graphics 4000 on
compatible Windows 8 operating system hybrid tablet/PCs;

Autodesk Softimage 2015* software helps streamline 3D asset creation and management with
Alembic caching, enhancements to the ICE platform and animatable weight maps in Syflex cloth.

Autodesk Maya LT 2015 Software Streamlines Indie Game Development

Maya LT 2015, the latest iteration of Autodesk’s cost-effective 3D animation and modeling software for
professional indie game makers, introduces a series of rich new features and integrations that help
advance the 3D content creation process for indie game development.

The updated application has:

  • Cloud integration allows artists to browse, open, modify and save Dropbox or Autodesk 360 files to the cloud directly through the Maya LT interface. Leverage 123D Catch or 123D Creature files saved in Autodesk’s 123D cloud storage as a reference for creating game assets in Maya LT;
  • Unfold 3D helps facilitate the seamless creation of UV maps from 3D models;
  • Substance Material Integration allows users to apply materials created in the Allegorithmic Substance Designer procedural texture creation tool to 3D models.

In addition to the new features, Maya LT 2015 also has the extension releases of Maya LT 2014, such as:
support for MEL scripting, a send-to-Unity workflow, uncapped polygon export to Unity, the ability to
export models or scenes up to 65,000 polygons in the FBX or OBJ formats, Human IK and IK Handle
Animation, and Boolean operations on polygon geometry.


Notice the little asterisk beside Softimage 2015?  Well, here is the fine print.

* Editor’s Note: Softimage 2015 will be the final new release of this product.


So there you have it, Autodesk finally killed it off.  I think the writing has been on the wall for a long time, but it is still sad to see an old friend go.


21. February 2014



Indie developers are increasingly purchasing “off the shelf” assets to ease the workload on their game projects.  The popularity of resources like the Unity Asset Store, Turbo Squid and Mixamo is certainly proof.  These resources are especially useful for the artistically challenged developers amongst us.  Now, Autodesk is throwing their hat into the ring with Character Generator.



What is Character Generator?  In their own words:

Drastically reduce the time needed to create customized, rigged and ready-to-animate 3D characters with Autodesk® Character Generator; a new, easy-to-use, web-based service. With Character Generator, users have control over a character’s body, face, clothes and hair, and can then generate their customized character for use in popular animation packages: Autodesk® Maya®, Autodesk® Maya LT™, and Autodesk® 3ds Max® software as well as in game engines like Unity.



Basically you use a number of pre-made components to generate models for export to Maya, Max and Unity.  ( Why no Softimage love? )


So, you pick a character:



Refine the body:



Add details/accessories:



And export as an FBX or Maya file:



It is available in two forms, paid and free.  The cost seems tied to the complexity of the model you’ve created.  Free versions obviously have some limitations, as shown on this (somewhat odd) chart below.  I am assuming the lack of checkmarks on the paid side was a mistake on Autodesk’s part. :)



Exported models are rigged with a HumanIK rig.  Perhaps the most noticeable difference between Free and Paid is that the free version is limited to low quality models.  That’s a bit of a loaded expression, as what do they mean by “quality”?  If they simply mean polygon count, for many people that isn’t a huge drawback. 


Then again, you can try it completely free, so what have you got to lose?  I glossed over a great deal of functionality in this post, so if you are interested, you should check out the Autodesk product page.


A few questions still remain for me.  If you are using an Autodesk toolchain, trying this out is a no brainer.  But if you are using other tools like Blender or Modo, how well does this slot into your pipeline?  How well does a HumanIK rig work in Unity, or does it work at all?  I’m going to try it and get back to you.  If you’ve tried it with a non-Autodesk toolchain, how was your experience?

17. February 2014


OpenTK, a low level C# binding for OpenGL, OpenAL and OpenCL, has just hit a milestone 1.1 release.  It’s a project used behind the scenes by a number of projects such as MonoGame.  Funny enough, they keep a low enough profile that everyone always thinks they are dead!  Fortunately for .NET loving OpenGL fans, they are not.



This release brings a number of new goodies, including:

1. support for OpenGL 4.4 and OpenGL ES 3.0
2. strongly-typed enums for OpenGL ES 2.0 and 3.0
3. new, faster OpenGL bindings based on hand-optimized IL
4. a new SDL2 backend for improved platform compatibility
5. new Joystick and GamePad APIs under OpenTK.Input
6. improved startup time and reduced memory consumption
7. inline documentation for all OpenGL and OpenGL ES core functions
8. a greatly expanded math library
9. numerous bugfixes for Mac OS X, Windows 8 and Linux
10. ANGLE support for Windows systems without OpenGL drivers
11. support for Retina / high-DPI monitors
12. monolinker can now be used to reduce the size of OpenTK.dll
13. precompiled binaries for optional dependencies (OpenAL, SDL2, monolinker)


You can read the full release notes here and download the full package here.  OpenTK is an open source project hosted on Github here.


28. January 2014


Back in August of 2012 we reported on a free PDF made available by Ryan Hawkins called Vertex.  It was a high detail guide to game art from various industry artists… oh, and it was completely free!

Now, Vertex2 has been released!

Photo: VERTEX 2 IS OFFICIALLY OUT!!!!! Share this link with your Facebook friends and please like us if you have not done so yet. We hope that you enjoy the second volume in the VERTEX series.

On our website below please visit the downloads section and download either book one or book two. Both are great reads and are unique to one another content wise.


Basically, it’s more of the same!  Erm, I think.  The reality is, I haven’t been able to download it; their website is down.  Apparently hosting a large downloadable file on a sub-standard host isn’t a great idea.


You can keep trying that link above, or hopefully I will locate a mirror and share it here.  If you have a mirror, let me know and I will post it!  Once you do in fact get a download of the book, if you like it, be sure to like them on their Facebook page or consider using the donation link at the end of the book.  Awesome high quality free content is certainly worth rewarding!


21. January 2014


Back in this post I discussed ways of making dynamically equipped 2D sprites.  One way was to render a 3D object out to 2D textures dynamically.  So far we have looked at working in 3D in LibGDX, then exporting and rendering an animated model from Blender; the next step is rendering a dynamic 3D object to a texture.  At first glance I thought this would be difficult, but in reality it was stupidly simple.  In fact, the very first bit of code I wrote simply worked!  Well, except for the results being upside down I suppose… details, details…


Anyways, that is what this post discusses: taking a 3D scene in LibGDX and rendering it to a 2D texture.  The code is just a modification of the code from the Blender to GDX post from a couple days back.


package com.gamefromscratch;

import com.badlogic.gdx.ApplicationListener;
import com.badlogic.gdx.Files.FileType;
import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.InputProcessor;
import com.badlogic.gdx.graphics.GL10;
import com.badlogic.gdx.graphics.PerspectiveCamera;
import com.badlogic.gdx.graphics.Pixmap.Format;
import com.badlogic.gdx.graphics.Texture;
import com.badlogic.gdx.graphics.g2d.SpriteBatch;
import com.badlogic.gdx.graphics.g2d.TextureRegion;
import com.badlogic.gdx.graphics.g3d.Environment;
import com.badlogic.gdx.graphics.g3d.Model;
import com.badlogic.gdx.graphics.g3d.ModelBatch;
import com.badlogic.gdx.graphics.g3d.ModelInstance;
import com.badlogic.gdx.graphics.g3d.attributes.ColorAttribute;
import com.badlogic.gdx.graphics.g3d.loader.G3dModelLoader;
import com.badlogic.gdx.graphics.g3d.utils.AnimationController;
import com.badlogic.gdx.graphics.g3d.utils.AnimationController.AnimationDesc;
import com.badlogic.gdx.graphics.g3d.utils.AnimationController.AnimationListener;
import com.badlogic.gdx.graphics.glutils.FrameBuffer;
import com.badlogic.gdx.utils.UBJsonReader;

public class ModelTest implements ApplicationListener, InputProcessor {
    private PerspectiveCamera camera;
    private ModelBatch modelBatch;
    private Model model;
    private ModelInstance modelInstance;
    private Environment environment;
    private AnimationController controller;
    private boolean screenShot = false;
    private FrameBuffer frameBuffer;
    private Texture texture = null;
    private TextureRegion textureRegion;
    private SpriteBatch spriteBatch;

    public void create() {
        // Create camera sized to screen's width/height with Field of View of 75 degrees
        camera = new PerspectiveCamera(75,
                Gdx.graphics.getWidth(),
                Gdx.graphics.getHeight());
        // Move the camera 5 units back along the z-axis and look at the origin
        camera.position.set(0f, 0f, 5f);
        camera.lookAt(0f, 0f, 0f);
        // Near and Far (plane) represent the minimum and maximum ranges of the camera in, um, units
        camera.near = 0.1f;
        camera.far = 300.0f;

        // A ModelBatch is like a SpriteBatch, just for models.  Use it to batch up geometry for OpenGL
        modelBatch = new ModelBatch();
        // Model loader needs a binary json reader to decode
        UBJsonReader jsonReader = new UBJsonReader();
        // Create a model loader passing in our json reader
        G3dModelLoader modelLoader = new G3dModelLoader(jsonReader);
        // Now load the model by name
        // Note, the model (g3db file) and textures need to be added to the assets folder of the Android proj
        model = modelLoader.loadModel(Gdx.files.getFileHandle("data/benddemo.g3db", FileType.Internal));
        // Now create an instance.  Instance holds the positioning data, etc of an instance of your model
        modelInstance = new ModelInstance(model);

        // move the model down a bit on the screen ( in a z-up world, down is -z ).
        modelInstance.transform.translate(0, 0, -2);

        // Finally we want some light, or we wont see our color.  The environment gets passed in during
        // the rendering process.  Create one, then create an Ambient ( non-positioned, non-directional ) light.
        environment = new Environment();
        environment.set(new ColorAttribute(ColorAttribute.AmbientLight, 0.8f, 0.8f, 0.8f, 1.0f));

        // You use an AnimationController to um, control animations.  Each control is tied to the model instance
        controller = new AnimationController(modelInstance);
        // Pick the current animation by name
        controller.setAnimation("Bend", 1, new AnimationListener() {

            public void onEnd(AnimationDesc animation) {
                // this will be called when the current animation is done.
                // queue up another animation called "balloon".
                // Passing a negative to loop count loops forever.  1f for speed is normal speed.
                controller.queue("balloon", -1, 1f, null, 0f);
            }

            public void onLoop(AnimationDesc animation) {
            }
        });

        frameBuffer = new FrameBuffer(Format.RGB888, Gdx.graphics.getWidth(), Gdx.graphics.getHeight(), false);
        spriteBatch = new SpriteBatch();
    }

    public void dispose() {
        modelBatch.dispose();
        model.dispose();
        frameBuffer.dispose();
        spriteBatch.dispose();
    }

    public void render() {
        // You've seen all this before, just be sure to clear the GL_DEPTH_BUFFER_BIT when working in 3D
        Gdx.gl.glViewport(0, 0, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
        Gdx.gl.glClearColor(1, 1, 1, 1);
        Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);

        // When you change the camera details, you need to call update();
        // Also note, you need to call update() at least once.
        camera.update();

        // You need to call update on the animation controller so it will advance the animation.  Pass in frame delta
        controller.update(Gdx.graphics.getDeltaTime());

        // If the user requested a screenshot, we need to call begin on our framebuffer
        // This redirects output to the framebuffer instead of the screen.
        if (screenShot)
            frameBuffer.begin();

        // Like spriteBatch, just with models!  pass in the model instance and the environment
        modelBatch.begin(camera);
        modelBatch.render(modelInstance, environment);
        modelBatch.end();

        // Now tell OpenGL that we are done sending graphics to the framebuffer
        if (screenShot) {
            frameBuffer.end();
            // get the graphics rendered to the framebuffer as a texture
            texture = frameBuffer.getColorBufferTexture();
            // welcome to the wonderful world of different coordinate systems!
            // simply put, the framebuffer is upside down to normal textures, so we have to flip it
            // Use a TextureRegion to do so
            textureRegion = new TextureRegion(texture);
            // and.... FLIP!  V (vertical) only
            textureRegion.flip(false, true);
            screenShot = false;
        }

        // In the case that we have a texture object to actually draw, we do so
        // using the old familiar SpriteBatch to do so.
        if (texture != null) {
            spriteBatch.begin();
            spriteBatch.draw(textureRegion, 0, 0);
            spriteBatch.end();
        }
    }

    public void resize(int width, int height) {
    }

    public void pause() {
    }

    public void resume() {
    }

    public boolean keyDown(int keycode) {
        return false;
    }

    public boolean keyUp(int keycode) {
        return false;
    }

    public boolean keyTyped(char character) {
        // If the user hits a key, take a screen shot.
        this.screenShot = true;
        return false;
    }

    public boolean touchDown(int screenX, int screenY, int pointer, int button) {
        return false;
    }

    public boolean touchUp(int screenX, int screenY, int pointer, int button) {
        return false;
    }

    public boolean touchDragged(int screenX, int screenY, int pointer) {
        return false;
    }

    public boolean mouseMoved(int screenX, int screenY) {
        return false;
    }

    public boolean scrolled(int amount) {
        return false;
    }
}
And… that’s it.

Run the code and you get the exact same results as the last example:



However, if you press any key, the current frame is saved to a texture, and that is instead displayed on screen.  Press a key again and it will update to the current frame:


Of course, the background colour is different, because we didn’t explicitly set one.  The above is a LibGDX Texture object, which can now be treated as a 2D sprite, used as a texture map, whatever.


The code is ultra simple.  We have a toggle variable, screenShot, that gets set if the user hits a key.  The actual process of rendering to texture is done with the magic of FrameBuffers.  Think of a framebuffer as a place for OpenGL to render other than the screen.  So instead of drawing the graphics to the screen, it draws them to a memory buffer.  We then get this memory buffer as a texture using getColorBufferTexture().  The only complication is that the frame buffer is rendered upside down.  This is easily fixed by wrapping the Texture in a TextureRegion and flipping the V coordinate.  Finally we display our newly generated texture using our old friend, the SpriteBatch.
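Incidentally, the flip itself costs almost nothing: TextureRegion.flip(false, true) just swaps the region’s V texture coordinates, so no pixels are copied.  Conceptually the result is the same as reversing the row order of the image, which this small stand-alone Java sketch illustrates (the flipVertically helper is hypothetical, not part of LibGDX):

```java
public class FlipDemo {
    // Reverse the row order of a pixel grid, mimicking the result of a V-flip.
    // TextureRegion.flip(false, true) achieves the same visual effect by
    // swapping texture coordinates instead of moving any pixel data.
    static int[][] flipVertically(int[][] pixels) {
        int h = pixels.length;
        int[][] flipped = new int[h][];
        for (int row = 0; row < h; row++) {
            flipped[row] = pixels[h - 1 - row];
        }
        return flipped;
    }

    public static void main(String[] args) {
        int[][] image = { {1, 1}, {2, 2}, {3, 3} }; // top row is {1, 1}
        int[][] flipped = flipVertically(image);
        // The bottom row of the source is now the top row
        System.out.println(flipped[0][0]); // prints 3
    }
}
```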



Gotta love it when something you expect to be difficult ends up being ultra easy.  Next I have to measure the performance, to see if this is something that can be done in a realtime situation, or whether we need to do it only on load/change.
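One rough way to answer that performance question is to average the cost of the render-to-texture path over many frames.  Here is a generic plain-Java timing sketch; the Runnable workload is a stand-in for the real frameBuffer.begin()/render/end() sequence, and the helper names are my own, not from LibGDX:

```java
public class TimingDemo {
    // Run an arbitrary piece of work repeatedly and return the
    // average cost per call in microseconds.
    static double averageMicros(Runnable work, int iterations) {
        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            work.run();
        }
        long elapsed = System.nanoTime() - start;
        return (elapsed / 1000.0) / iterations;
    }

    public static void main(String[] args) {
        // Placeholder workload; in a real test this would be the
        // framebuffer render pass inside render().
        double avg = averageMicros(() -> Math.sqrt(12345.0), 1000);
        System.out.println("Average cost: " + avg + " microseconds");
    }
}
```

If the average lands well under a frame's budget (about 16 ms at 60 fps), doing it in realtime is plausible; otherwise regenerating the texture only on load or equipment change is the safer design.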

