Blender 2.69 released. What is in it for game developers?

1. November 2013

 

The Blender Foundation announced the release of Blender 2.69, and now we are going to take a quick look at what is in it for game developers.

 

The biggest feature on that front is the ability to import FBX files, as well as export FBX and OBJ files with split normals.  As FBX support improves, it becomes easier and easier to slot Blender into a seamless multi-application workflow.

 

The mesh bisect tool was added for quickly cutting an object in half.


 

A clean-up tool was added for automatically detecting and fixing holes in a mesh.

Symmetrize was re-written and now preserves UV and mesh data.

Probably the biggest new modeling feature is the Hidden Wire display mode.  With this enabled, only front-facing wireframe is shown.


There were a number of other small modeling changes as well.

 

Plane Tracking was added to the Motion Tracker, for replacing flat surfaces in a scene, such as a billboard.


 

There were also a number of improvements to the Cycles renderer.

 

All told, not a ton new in this update.

News, Art




LibGDX Tutorial 6: Motion controls

30. October 2013

In the previous tutorial we looked at handling touch and gesture events.  These days, most mobile devices have very accurate motion detection capabilities, which LibGDX fully supports.  In this example we will look at how to handle motion, as well as how to detect whether a device supports certain functionality and which way the device is oriented.

 

This project revolves around a single code example, but there are some configuration steps you need to be aware of.

 

First off, in order to tell LibGDX that you want to use the compass and accelerometer, you need to pass that as part of the configuration in your Android MainActivity.  In the Android project, locate MainActivity.java and edit it accordingly:

package com.gamefromscratch;

import android.os.Bundle;

import com.badlogic.gdx.backends.android.AndroidApplication;
import com.badlogic.gdx.backends.android.AndroidApplicationConfiguration;

public class MainActivity extends AndroidApplication {
    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        AndroidApplicationConfiguration cfg = new AndroidApplicationConfiguration();
        cfg.useGL20 = true;
        cfg.useAccelerometer = true;
        cfg.useCompass = true;

        initialize(new MotionDemo(), cfg);
    }
}

 

The meaningful lines are:

cfg.useAccelerometer = true;
cfg.useCompass = true;

These lines tell LibGDX to enable both sensors.

Next we need to make a couple of changes to your Android manifest.  This is a configuration file that tells the Android operating system how your application behaves and what permissions it requires to run.  You could literally write an entire book about Android manifests, so if you want more information read here.  The manifest is located at the root of your Android project and is called AndroidManifest.xml.  There are a couple of ways you can edit it: simply right-click AndroidManifest.xml and select Open With->.


 

I personally prefer to simply edit it using the Text Editor, but if you want a more guided experience, you can select the Android Manifest Editor.


This is basically a GUI layer over top of the Android manifest.  Using the tabs across the bottom you can switch between the different categories, and a corresponding form will appear.  If you click the AndroidManifest.xml tab it will bring up a text view of the manifest.  Use whichever interface you prefer; it makes no difference in the end.

There are two changes we want to make to the manifest.  First, we want the device to support rotation, so if the user rotates their device, the application rotates accordingly.  This is done by setting the property android:screenOrientation to fullSensor.  Next, we want to grant the permission android.permission.VIBRATE.  If you do not add this permission, calling vibrate() will cause your application to crash!

 

Here is how my manifest looks with changes made:

<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.gamefromscratch"
    android:versionCode="1"
    android:versionName="1.0" >

    <uses-sdk android:minSdkVersion="5" android:targetSdkVersion="17" />
    <uses-permission android:name="android.permission.VIBRATE"/>

    <application
        android:icon="@drawable/ic_launcher"
        android:label="@string/app_name" >
        <activity
            android:name=".MainActivity"
            android:label="@string/app_name"
            android:screenOrientation="fullSensor"
            android:configChanges="keyboard|keyboardHidden|orientation|screenSize">
            <intent-filter>
                <action android:name="android.intent.action.MAIN" />
                <category android:name="android.intent.category.LAUNCHER" />
            </intent-filter>
        </activity>
    </application>

</manifest>

The changes are the uses-permission element and the screenOrientation attribute on the activity.  You want to be careful when you request additional permissions, as they will be shown when the user installs your application.  Too many permissions and people start getting scared of your application.  Of course, if you need to do something that requires a permission, there isn’t much you can do!  As to the screenOrientation value, this tells Android which orientations your application supports.  There are a number of options, Landscape and Portrait being two of the most common.  fullSensor basically means all directions are supported; you can rotate the device 360 degrees and it will be rotated accordingly.  On the other hand, if you select “user”, you cannot rotate the device 180 degrees, meaning you cannot use it upside down.  You can read more about the available properties in the link I provided earlier.

There is one last important thing to be aware of before moving on.  Your Android project will actually have two AndroidManifest.xml files, one in the root directory and another in the bin subfolder.  Be certain to use the one in the root directory, as the one in bin will be overwritten by a copy of it during the build!

 

Ok, now that we are fully configured, let’s jump into the code sample:

package com.gamefromscratch;

import com.badlogic.gdx.ApplicationListener;
import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.Input.Orientation;
import com.badlogic.gdx.Input.Peripheral;
import com.badlogic.gdx.graphics.Color;
import com.badlogic.gdx.graphics.GL10;
import com.badlogic.gdx.graphics.g2d.BitmapFont;
import com.badlogic.gdx.graphics.g2d.SpriteBatch;

public class MotionDemo implements ApplicationListener {
    private SpriteBatch batch;
    private BitmapFont font;
    private String message = "Do something already!";
    private float highestY = 0.0f;

    @Override
    public void create() {
        batch = new SpriteBatch();
        font = new BitmapFont(Gdx.files.internal("data/arial-15.fnt"), false);
        font.setColor(Color.RED);
    }

    @Override
    public void dispose() {
        batch.dispose();
        font.dispose();
    }

    @Override
    public void render() {
        int w = Gdx.graphics.getWidth();
        int h = Gdx.graphics.getHeight();
        Gdx.gl.glClearColor(1, 1, 1, 1);
        Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT);

        batch.begin();

        int deviceAngle = Gdx.input.getRotation();
        Orientation orientation = Gdx.input.getNativeOrientation();
        float accelY = Gdx.input.getAccelerometerY();
        if(accelY > highestY)
            highestY = accelY;

        message = "Device rotated to:" + Integer.toString(deviceAngle) + " degrees\n";
        message += "Device orientation is ";
        switch(orientation){
            case Landscape:
                message += " landscape.\n";
                break;
            case Portrait:
                message += " portrait. \n";
                break;
            default:
                message += " complete crap!\n";
                break;
        }

        message += "Device Resolution: " + Integer.toString(w) + "," + Integer.toString(h) + "\n";
        message += "Y axis accel: " + Float.toString(accelY) + " \n";
        message += "Highest Y value: " + Float.toString(highestY) + " \n";

        if(Gdx.input.isPeripheralAvailable(Peripheral.Vibrator)){
            if(accelY > 7){
                Gdx.input.vibrate(100);
            }
        }

        if(Gdx.input.isPeripheralAvailable(Peripheral.Compass)){
            message += "Azimuth:" + Float.toString(Gdx.input.getAzimuth()) + "\n";
            message += "Pitch:" + Float.toString(Gdx.input.getPitch()) + "\n";
            message += "Roll:" + Float.toString(Gdx.input.getRoll()) + "\n";
        }
        else{
            message += "No compass available\n";
        }

        font.drawMultiLine(batch, message, 0, h);

        batch.end();
    }

    @Override
    public void resize(int width, int height) {
        batch.dispose();
        batch = new SpriteBatch();
        String resolution = Integer.toString(width) + "," + Integer.toString(height);
        Gdx.app.log("MJF", "Resolution changed " + resolution);
    }

    @Override
    public void pause() {
    }

    @Override
    public void resume() {
    }
}

 

When you run this program on a device, a number of diagnostic values will be displayed on screen.


 

As you move the device, the various values will update.  If you raise your phone to within about 45 degrees of completely upright, it will vibrate.  Of course, this all assumes that your device supports these sensors!

 

The code itself is actually remarkably straightforward; LibGDX makes working with motion sensors easy.  It is in understanding the returned values that things get a bit more complicated.  The vast majority of the logic is in the render() method.  First we get the angle the device is rotated to.  This value is in degrees, with 0 being straight in front of you, parallel to your face.  One important thing to realize is that this value will always have 0 as up, regardless of whether you are in portrait or landscape mode.  This is something LibGDX does to make things easier for you, but it is different behaviour from the Android norm.

Next we get the orientation of the device.  Orientation can be either landscape or portrait (wide screen vs tall screen).  Next we check the value of the accelerometer along the Y axis using getAccelerometerY().  You can also check the accelerometer for movement along the X and Z axes using getAccelerometerX() and getAccelerometerZ() respectively.  Once again, LibGDX standardizes the axis directions, regardless of the device’s orientation.  Speaking of which, Y is up.  This means if you hold your phone straight in front of you, parallel to your face, the Y axis is what you would traditionally think of as up and down.  The Z axis would be in front of you, so if you made a push or pull motion, it would be along the Z axis.  The X axis tracks movements to the left and right.
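As a quick aside, all three axes are read the same way.  A minimal sketch (my own illustration, not part of the tutorial code) polling each axis:

float accelX = Gdx.input.getAccelerometerX(); // left/right motion
float accelY = Gdx.input.getAccelerometerY(); // up/down motion
float accelZ = Gdx.input.getAccelerometerZ(); // in/out (push/pull) motion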

So then, what exactly are the values returned by the accelerometer?  Well, this part gets a bit confusing, as in a way it measures both motion and orientation.  If you hold your phone straight out in front of you, with the screen parallel to your face, it will return a value of 9.8.  That number should look familiar: it’s the acceleration due to gravity, in meters per second squared.  Therefore, if your phone is stationary and upright, the reading is 9.8.  If you move the phone up, parallel to your body, the value will rise above 9.8; the amount depends on how fast you are moving the phone.  Moving down, on the other hand, will return a value below 9.8.  If you put the phone down flat on a desk, it will instead return 0, and flipping the phone upside down will return -9.8 if held stationary.  Obviously the same occurs along the X and Z axes, but there it would indicate motion left and right or in and out, instead of up and down.

Ok, back to our code.  We check to see if the current accelY value is the highest so far and, if it is, we record it to display.  Next we check what value the orientation returned and display the appropriate message, then dump some of the information we’ve gathered out to be displayed on screen.  Next we make the very important call Gdx.input.isPeripheralAvailable().  This will return true if the user’s device supports the requested functionality.  First we check to see if the phone supports vibrating and, if it does, we check whether the Y acceleration is over 7.  Remember, the value 9.8 represents straight up and down, so a reading of 7 or higher means the device is within about 45 degrees of vertical.  If it is, we vibrate by calling vibrate(); the value passed is the number of milliseconds to vibrate for.
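To see where that angle comes from: when the device is stationary, the Y reading is just gravity projected onto the Y axis, so the tilt from vertical is arccos(accelY / 9.8), and arccos(7 / 9.8) ≈ 44 degrees.  Here is a minimal sketch of the same check expressed as an angle (my own, assuming the device is roughly stationary, since movement adds to the reading):

// a sketch, not part of the tutorial's code
float accelY = Gdx.input.getAccelerometerY();
// clamp to acos()'s valid domain in case movement pushes the reading past 9.8
float ratio = Math.max(-1.0f, Math.min(1.0f, accelY / 9.8f));
float tiltFromVertical = (float)Math.toDegrees(Math.acos(ratio));
// equivalent to the accelY > 7 test above
if(tiltFromVertical < 45.0f && Gdx.input.isPeripheralAvailable(Peripheral.Vibrator))
   Gdx.input.vibrate(100);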

Next we check to see if the device has a compass.  If it does, you can check the orientation of the device relative to magnetic north.  Here are the descriptions of each value from Google’s documentation:

  • Azimuth, rotation around the Z axis (0<=azimuth<360). 0 = North, 90 = East, 180 = South, 270 = West
  • Pitch, rotation around X axis (-180<=pitch<=180), with positive values when the z-axis moves toward the y-axis.
  • Roll, rotation around Y axis (-90<=roll<=90), with positive values when the z-axis moves toward the x-axis.

You can read more about it here.
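As an illustration, the azimuth can be bucketed into a rough compass heading.  This snippet is a sketch of my own, not from the tutorial; the normalization is there in case an implementation reports negative degrees:

if(Gdx.input.isPeripheralAvailable(Peripheral.Compass)){
   float azimuth = Gdx.input.getAzimuth(); // 0 = North, 90 = East, 180 = South, 270 = West
   String[] headings = { "N", "NE", "E", "SE", "S", "SW", "W", "NW" };
   // normalize to 0..360, then split the circle into eight 45 degree buckets
   float normalized = (azimuth % 360 + 360) % 360;
   int bucket = (int)((normalized + 22.5f) / 45f) % 8;
   message += "Heading: " + headings[bucket] + "\n";
}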

Finally we draw the message we have been composing on screen.

There is only one other very important thing to notice in this example:

public void resize(int width, int height) {
   batch.dispose();
   batch = new SpriteBatch();
   String resolution = Integer.toString(width) + "," + Integer.toString(height);
   Gdx.app.log("MJF", "Resolution changed " + resolution);
}

 

In the resize() method we dispose of and recreate our SpriteBatch.  This is because when you change the orientation of the device from landscape to portrait or vice versa, you invalidate the SpriteBatch; it is now the wrong size for your screen.  Therefore, in the resize() call, we recreate it.
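As an aside, an alternative approach (a sketch, not what this tutorial does) is to keep the original SpriteBatch and simply update its projection matrix to the new dimensions.  This requires importing com.badlogic.gdx.math.Matrix4:

@Override
public void resize(int width, int height) {
   // re-aim the existing batch at the new screen size instead of recreating it
   batch.setProjectionMatrix(new Matrix4().setToOrtho2D(0, 0, width, height));
}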

Programming




New book Game Development with Three.js

26. October 2013

Today where I live the weather is absolutely abysmal ( think English weather, but colder ) and I have very little desire to work on any of my own projects, so I took to Safari to see what new books were released.  There’s a new title, Game Development with Three.js, that was just released today. ( Safari Link ).  I have long been interested in learning Three.js, so I decided to check it out.  If you’ve never heard of it, Three.js is probably the most popular 3D library for WebGL development.  It provides a large range of functionality, including fallback renderers if WebGL is missing, a scene graph, animation, lights, materials, shaders, primitives and even object loaders for most of the popular 3D applications.

 

So today I decided to jump in and read Game Development with Three.js.  First things first… this book is short, very short.  Just over 100 pages, actually.  On the other hand, it’s reasonably cheap at $10 for the Kindle version.  The print version is a slightly more “rip-offish” $25, but don’t hold that against the author… it’s the way Packt prices books… my own was priced at $17 for the Kindle version and a whopping $50 for the print version.  Apparently Packt wants to sell e-books… anyways, back to the book.

 

It’s short, but remarkably concise.  Here is the table of contents:

  • Preface
  • Hello, Three.js
  • Building a world
  • Exploring and Interacting
  • Adding Detail
  • Design and Development

 

The first chapter is the obvious introductory chapter: setting up a development environment, configuring Three.js, and a simple introductory sample.  Where needed there are appropriate and useful graphics, such as visualizing the differences between orthographic and perspective rendering.  Chapter two is probably the meat of the book; it’s a crash course in Three.js, introducing geometry, lighting types, rendering, etc.  The main sample from the chapter creates a cityscape using primitives.

 


 

It’s a pretty clever example to use when just working with raw primitives.  One thing the book does well is use graphical tables to illustrate concepts; one example is the section on the various shading options available in Three.js.

 


 

They’re effective, easy to grok and clean.

 

The third chapter starts down the “game” part of the book, covering the non-Three.js aspects: keyboard and mouse handling, mouse hit detection ( ray casting ), and the start of a simple voxel-based first person shooter.  This is where you create the skeleton of a game, such as the game loop, a simple text-based map format, movement, collision and bullets.

 

Chapter four is all about fleshing out the first person shooter, such as loading assets from 3D modelling applications, simple animation, particle systems, sound ( an experimental aspect of Three.js ) and rendering effects/post processing.  

 

The fifth chapter is a hodgepodge of topics such as optimization, network usage, level of detail, JavaScript best practices, etc.

 

So, what did I think of the book?  Well, for my needs, a rainy afternoon time killer that introduces Three.js, it did exactly that.  There is a surprising amount of information jammed into just over 100 pages.  That said, for a 100 page book, they left a lot out as well.  If you’ve got no prior game programming experience and need concepts like the game loop, coordinate systems, or general terms ( like UV mapping or texturing ) explained to you, you should look elsewhere.  The book’s coverage of most topics simply isn’t that deep.  Additionally, there are a few things that are absent or only briefly covered, such as shader programming, which I think is important enough to merit an entire chapter of its own.  It does, however, present a complete, if simple, game to learn from, so it is certainly useful for beginners.  If you are somewhat experienced with game development and want a crash course in Three.js, this book is a very good read… especially on a rainy day.

 




LibGDX Tutorial 5: Handling Input–Touch and gestures

24. October 2013

 

In the previous tutorial we looked at handling mouse and keyboard events, both event driven and polled.  Now we will look at how touch works.  To follow along at this point you need a touch enabled device ( multi-touch with a mouse is tricky, to say the least! ), although all the code will work in the Desktop and HTML targets; you simply won’t be able to test it there.  Let’s jump right in with an example.  This example shows how to handle multiple simultaneous touches:

 


 

package com.gamefromscratch;

import java.util.HashMap;
import java.util.Map;

import com.badlogic.gdx.ApplicationListener;
import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.InputProcessor;
import com.badlogic.gdx.graphics.Color;
import com.badlogic.gdx.graphics.GL10;
import com.badlogic.gdx.graphics.g2d.BitmapFont;
import com.badlogic.gdx.graphics.g2d.BitmapFont.TextBounds;
import com.badlogic.gdx.graphics.g2d.SpriteBatch;

public class InputDemo2 implements ApplicationListener, InputProcessor {
    private SpriteBatch batch;
    private BitmapFont font;
    private String message = "Touch something already!";
    private int w,h;
    
    class TouchInfo {
        public float touchX = 0;
        public float touchY = 0;
        public boolean touched = false;
    }
    
    private Map<Integer,TouchInfo> touches = new HashMap<Integer,TouchInfo>();
    
    @Override
    public void create() {        
        batch = new SpriteBatch();    
        font = new BitmapFont(Gdx.files.internal("data/arial-15.fnt"),false);
        font.setColor(Color.RED);
        w = Gdx.graphics.getWidth();
        h = Gdx.graphics.getHeight();
        Gdx.input.setInputProcessor(this);
        for(int i = 0; i < 5; i++){
            touches.put(i, new TouchInfo());
        }
    }

    @Override
    public void dispose() {
        batch.dispose();
        font.dispose();
    }

    @Override
    public void render() {        
        Gdx.gl.glClearColor(1, 1, 1, 1);
        Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT);
        
        batch.begin();
        
        message = "";
        for(int i = 0; i < 5; i++){
            if(touches.get(i).touched)
                message += "Finger:" + Integer.toString(i) + " touch at:" +
                        Float.toString(touches.get(i).touchX) +
                        "," +
                        Float.toString(touches.get(i).touchY) +
                        "\n";
                                
        }
        TextBounds tb = font.getBounds(message);
        float x = w/2 - tb.width/2;
        float y = h/2 + tb.height/2;
        font.drawMultiLine(batch, message, x, y);
        
        batch.end();
    }

    @Override
    public void resize(int width, int height) {
    }

    @Override
    public void pause() {
    }

    @Override
    public void resume() {
    }

    @Override
    public boolean keyDown(int keycode) {
        // TODO Auto-generated method stub
        return false;
    }

    @Override
    public boolean keyUp(int keycode) {
        // TODO Auto-generated method stub
        return false;
    }

    @Override
    public boolean keyTyped(char character) {
        // TODO Auto-generated method stub
        return false;
    }

    @Override
    public boolean touchDown(int screenX, int screenY, int pointer, int button) {
        if(pointer < 5){
            touches.get(pointer).touchX = screenX;
            touches.get(pointer).touchY = screenY;
            touches.get(pointer).touched = true;
        }
        return true;
    }

    @Override
    public boolean touchUp(int screenX, int screenY, int pointer, int button) {
        if(pointer < 5){
            touches.get(pointer).touchX = 0;
            touches.get(pointer).touchY = 0;
            touches.get(pointer).touched = false;
        }
        return true;
    }

    @Override
    public boolean touchDragged(int screenX, int screenY, int pointer) {
        // TODO Auto-generated method stub
        return false;
    }

    @Override
    public boolean mouseMoved(int screenX, int screenY) {
        // TODO Auto-generated method stub
        return false;
    }

    @Override
    public boolean scrolled(int amount) {
        // TODO Auto-generated method stub
        return false;
    }
}

Now when you run it, diagnostic information will be displayed for each finger that is touching the screen.


 

For each finger, it displays the coordinates at which that finger is touching, up to a total of 5 fingers.

So what exactly is going on here?  We create a simple class, TouchInfo, for holding touch details: whether it’s touched, plus the X and Y coordinates.  We then create a HashMap of touches, with an Integer as the key and a TouchInfo class as the data.  The key is the index of the finger doing the touching.  The logic is actually in the touchDown and touchUp event handlers.  On touch down, we update the touches map at the index represented by the value pointer.  As you may recall from the previous tutorial, the value pointer represents which finger is currently pressed.  When the finger is released, touchUp is fired and we simply clear the touch entry at that location.  Finally, in render() we loop through the touches map and display the ones that are touched, and where.

At the end of the day, touches are basically identical to mouse clicks, except you can have multiple of them and there are no buttons.  Oh, I suppose I should mention that the 5 touch limit in this example was just a number I picked arbitrarily.  LibGDX supports up to 20 touches, although no devices do.  The iPad for example can track up to 11, the iPhone tracks up to 5, while the HTC One tracks 10.  On your Google phone you can track how many touches it supports using this app.  That said, 5 is a pretty safe and reasonable number… heck, I don’t think I’ve ever used more than 4 on any device.
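It is also worth noting that you do not have to use event handlers for this.  LibGDX lets you poll individual pointers directly; here is a minimal sketch of the polled equivalent (again, 5 is an arbitrary cap):

// inside render(): poll each of the first five pointers
for(int i = 0; i < 5; i++){
   if(Gdx.input.isTouched(i)){
      int x = Gdx.input.getX(i); // screen coordinates of finger i
      int y = Gdx.input.getY(i);
      Gdx.app.log("TOUCH", "Finger " + i + " at " + x + "," + y);
   }
}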

 

Touch gestures

 

There are a number of common gestures that have become ubiquitous in the mobile world.  Things like pinch to zoom, or flick/fling and long press have become the norm.  Fortunately GDX supports all of these things out of the box.  Let’s jump right into a simple demo:

package com.gamefromscratch;


import com.badlogic.gdx.ApplicationListener;
import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.graphics.Color;
import com.badlogic.gdx.graphics.GL10;
import com.badlogic.gdx.graphics.g2d.BitmapFont;
import com.badlogic.gdx.graphics.g2d.BitmapFont.TextBounds;
import com.badlogic.gdx.graphics.g2d.SpriteBatch;
import com.badlogic.gdx.input.GestureDetector;
import com.badlogic.gdx.input.GestureDetector.GestureListener;
import com.badlogic.gdx.math.Vector2;

public class InputDemo3 implements ApplicationListener, GestureListener {
    private SpriteBatch batch;
    private BitmapFont font;
    private String message = "No gesture performed yet";
    private int w,h;

    
    @Override
    public void create() {        
        batch = new SpriteBatch();    
        font = new BitmapFont(Gdx.files.internal("data/arial-15.fnt"),false);
        font.setColor(Color.RED);
        w = Gdx.graphics.getWidth();
        h = Gdx.graphics.getHeight();
        
        GestureDetector gd = new GestureDetector(this);
        Gdx.input.setInputProcessor(gd);
    }

    @Override
    public void dispose() {
        batch.dispose();
        font.dispose();
    }

    @Override
    public void render() {        
        Gdx.gl.glClearColor(1, 1, 1, 1);
        Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT);
        
        batch.begin();
        
        TextBounds tb = font.getBounds(message);
        float x = w/2 - tb.width/2;
        float y = h/2 + tb.height/2;
        
        font.drawMultiLine(batch, message, x, y);
        
        batch.end();
    }

    @Override
    public void resize(int width, int height) {
    }

    @Override
    public void pause() {
    }

    @Override
    public void resume() {
    }

    @Override
    public boolean touchDown(float x, float y, int pointer, int button) {
        // TODO Auto-generated method stub
        return true;
    }

    @Override
    public boolean tap(float x, float y, int count, int button) {
        message = "Tap performed, finger" + Integer.toString(button);
        return true;
    }

    @Override
    public boolean longPress(float x, float y) {
        message = "Long press performed";
        return true;
    }

    @Override
    public boolean fling(float velocityX, float velocityY, int button) {
        message = "Fling performed, velocity:" + Float.toString(velocityX) +
                "," + Float.toString(velocityY);
        return true;
    }

    @Override
    public boolean pan(float x, float y, float deltaX, float deltaY) {
        message = "Pan performed, delta:" + Float.toString(deltaX) +
                "," + Float.toString(deltaY);
        return true;
    }

    @Override
    public boolean zoom(float initialDistance, float distance) {
        message = "Zoom performed, initial Distance:" + Float.toString(initialDistance) +
                " Distance: " + Float.toString(distance);
        return true;
    }

    @Override
    public boolean pinch(Vector2 initialPointer1, Vector2 initialPointer2,
            Vector2 pointer1, Vector2 pointer2) {
        message = "Pinch performed";
        return true;
    }

}

 

If you run it, as you perform various gestures they will be displayed centered on the screen.  Supported gestures include tap, fling ( flick ), pinch ( two fingers moving closer together ), zoom ( two fingers moving apart ), pan ( one finger hold and slide ) and long press ( tap and hold ), as well as good ol’ fashioned touch.

Just like we implemented InputProcessor to handle touch, mouse and keyboard events, we implement GestureListener to accept gesture events from LibGDX.  In create() you create a GestureDetector using your GestureListener, and once again you register it using Gdx.input.setInputProcessor().  Each gesture triggers the corresponding event in your GestureListener.  In each, we simply update the displayed message to reflect the most recently performed gesture.

One important concept we didn’t cover here is configuring your GestureDetector…  how do you determine how long a long press is, or how much time must elapse before a pan becomes a fling?  The simple answer is: using the GestureDetector constructor.  You can read more about it here.
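For example, GestureDetector has a constructor that takes the tuning values directly.  The values below are its documented defaults, shown purely for illustration:

GestureDetector gd = new GestureDetector(
      20,    // halfTapSquareSize: pixels a finger may wander and still count as a tap
      0.4f,  // tapCountInterval: max seconds between taps to count as a multi-tap
      1.1f,  // longPressDuration: seconds a press must be held to be a long press
      0.15f, // maxFlingDelay: max seconds between last pan and lift-off for a fling
      this); // your GestureListener
Gdx.input.setInputProcessor(gd);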

 

Handling multiple InputProcessors

 

So, what if you wanted to handle gestures AND keyboard events… what then?  Fortunately the answer is quite simple: instead of passing setInputProcessor() an InputProcessor or a GestureDetector, you pass in an InputMultiplexer, like so:

package com.gamefromscratch;


import com.badlogic.gdx.ApplicationListener;
import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.InputMultiplexer;
import com.badlogic.gdx.graphics.Color;
import com.badlogic.gdx.graphics.GL10;
import com.badlogic.gdx.graphics.g2d.BitmapFont;
import com.badlogic.gdx.graphics.g2d.BitmapFont.TextBounds;
import com.badlogic.gdx.graphics.g2d.SpriteBatch;
import com.badlogic.gdx.input.GestureDetector;
import com.badlogic.gdx.input.GestureDetector.GestureListener;
import com.badlogic.gdx.InputProcessor;
import com.badlogic.gdx.math.Vector2;

public class InputDemo4 implements ApplicationListener, GestureListener, InputProcessor {
    private SpriteBatch batch;
    private BitmapFont font;
    private String message = "No gesture performed yet";
    private int w,h;

    
    @Override
    public void create() {        
        batch = new SpriteBatch();    
        font = new BitmapFont(Gdx.files.internal("data/arial-15.fnt"),false);
        font.setColor(Color.RED);
        w = Gdx.graphics.getWidth();
        h = Gdx.graphics.getHeight();
        
        InputMultiplexer im = new InputMultiplexer();
        GestureDetector gd = new GestureDetector(this);
        im.addProcessor(gd);
        im.addProcessor(this);
        
        
        Gdx.input.setInputProcessor(im);
    }

    @Override
    public void dispose() {
        batch.dispose();
        font.dispose();
    }

    @Override
    public void render() {        
        Gdx.gl.glClearColor(1, 1, 1, 1);
        Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT);
        
        batch.begin();
        
        TextBounds tb = font.getBounds(message);
        float x = w/2 - tb.width/2;
        float y = h/2 + tb.height/2;
        
        font.drawMultiLine(batch, message, x, y);
        
        batch.end();
    }

    @Override
    public void resize(int width, int height) {
    }

    @Override
    public void pause() {
    }

    @Override
    public void resume() {
    }

    @Override
    public boolean touchDown(float x, float y, int pointer, int button) {
        message = "Touch down!";
        Gdx.app.log("INFO", message);
        return true;
    }

    @Override
    public boolean tap(float x, float y, int count, int button) {
        message = "Tap performed, finger" + Integer.toString(button);
        Gdx.app.log("INFO", message);
        return false;
    }

    @Override
    public boolean longPress(float x, float y) {
        message = "Long press performed";
        Gdx.app.log("INFO", message);
        return true;
    }

    @Override
    public boolean fling(float velocityX, float velocityY, int button) {
        message = "Fling performed, velocity:" + Float.toString(velocityX) +
                "," + Float.toString(velocityY);
        Gdx.app.log("INFO", message);
        return true;
    }

    @Override
    public boolean pan(float x, float y, float deltaX, float deltaY) {
        message = "Pan performed, delta:" + Float.toString(deltaX) +
                "," + Float.toString(deltaY);
        Gdx.app.log("INFO", message);
        return true;
    }

    @Override
    public boolean zoom(float initialDistance, float distance) {
        message = "Zoom performed, initial Distance:" + Float.toString(initialDistance) +
                " Distance: " + Float.toString(distance);
        Gdx.app.log("INFO", message);
        return true;
    }

    @Override
    public boolean pinch(Vector2 initialPointer1, Vector2 initialPointer2,
            Vector2 pointer1, Vector2 pointer2) {
        message = "Pinch performed";
        Gdx.app.log("INFO", message);

        return true;
    }

    @Override
    public boolean keyDown(int keycode) {
        message = "Key Down";
        Gdx.app.log("INFO", message);
        return true;
    }

    @Override
    public boolean keyUp(int keycode) {
        message = "Key up";
        Gdx.app.log("INFO", message);
        return true;
    }

    @Override
    public boolean keyTyped(char character) {
        message = "Key typed";
        Gdx.app.log("INFO", message);
        return true;
    }

    @Override
    public boolean touchDown(int screenX, int screenY, int pointer, int button) {
        message = "Touch Down";
        Gdx.app.log("INFO", message);

        return false;
    }

    @Override
    public boolean touchUp(int screenX, int screenY, int pointer, int button) {
        message = "Touch up";
        Gdx.app.log("INFO", message);
        return false;
    }

    @Override
    public boolean touchDragged(int screenX, int screenY, int pointer) {
        message = "Touch Dragged";
        Gdx.app.log("INFO", message);
        return false;
    }

    @Override
    public boolean mouseMoved(int screenX, int screenY) {
        message = "Mouse moved";
        Gdx.app.log("INFO", message);
        return false;
    }

    @Override
    public boolean scrolled(int amount) {
        message = "Scrolled";
        Gdx.app.log("INFO", message);
        return false;
    }

}

Due to the fact that multiple events can be fired off at once, in addition to printing them on screen, we also log them using Gdx.app.log().  You can see logged events in the LogCat window in Eclipse.  There is also a program called ddms ( it’s a BAT script on Windows ) in the android-sdk tools folder that will display the same information.

 

So, that’s what log() does… now back to the code.  The key part here is:

InputMultiplexer im = new InputMultiplexer();
GestureDetector gd = new GestureDetector(this);
im.addProcessor(gd);
im.addProcessor(this);

Gdx.input.setInputProcessor(im);

Essentially you create the multiplexer, then add both the InputProcessor and the GestureDetector to it via addProcessor(), and it is the multiplexer that is passed to setInputProcessor().  Otherwise the code works pretty much exactly the same.  There are two things of critical importance here.  First, the order in which processors are added to the multiplexer determines the order in which they get first crack at events.  Second, and this one is super important, if you return true from an event handler, it means the event is handled.  Think about that for a second; it’s an important concept to grasp.  While something like a touch up or down event is pretty straightforward, a, say… pinch event is not.  In fact, a pinch event is composed of a number of other events, including multiple touch events.  However, if you return true from, say, the touchDown event, that event will not bubble through to the gesture detector.  Therefore, if you are supporting multi-touch, be sure to return false from the more atomic events such as touchDown and touchUp, so the gesture detector still gets a crack at handling them!
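To make the consumption rule concrete, here is a small sketch of my own (using LibGDX’s InputAdapter convenience class) of a processor that swallows key events but lets touch events bubble through to a GestureDetector behind it:

InputProcessor keysOnly = new InputAdapter() {
    @Override
    public boolean keyDown(int keycode) {
        return true;  // consumed: later processors never see this key
    }
    @Override
    public boolean touchDown(int screenX, int screenY, int pointer, int button) {
        return false; // not consumed: the GestureDetector still gets a crack at it
    }
};

InputMultiplexer im = new InputMultiplexer();
im.addProcessor(keysOnly);                  // added first, asked first
im.addProcessor(new GestureDetector(this)); // only sees events keysOnly passed on
Gdx.input.setInputProcessor(im);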

Programming




Autodesk releases Maya LT Extension 1. Polygon limit more than doubled

23. October 2013

 

I just received the following information from Autodesk about Maya LT, an indie focused version of the Maya 3D software we covered back in August.  Today’s release adds some new functionality and addresses one of the biggest complaints, the low polygon limit.

 

Autodesk Releases Maya LT Extension 1 for Indie and Mobile Game Developers


Advances asset export and 3D modeling through enhanced interoperability with Unity3D and increased polygon count


Today Autodesk launched the first extension for Autodesk Maya LT - the company's recently released 3D modeling and animation tool designed specifically for independent and mobile game developers. With new features such as improved interoperability with Unity3D, an increased polygon count and more, Maya LT Extension 1 simplifies the export of 3D content into artists' desired game engines and expands 3D modeling capabilities. Extension 1 is available today as a free download for customers on subscription and pay-as-you-go plans.


Key features in Maya LT Extension 1 include:
- Improved Interoperability with Unity: A new “Send to Unity” workflow allows artists to export 3D assets with unlimited polygon counts from Maya LT directly into the asset folder of a Unity project.
- Increased Polygon Count for Export: Export high-resolution models or scenes up to 65,000 polygons in the Autodesk FBX asset exchange format to the desired game engine.
- New Retopology Toolset: First integrated in Maya 2014 and now part of Maya LT, NEX modeling technology streamlines the retopology workflow. Artists can optimize meshes for cleaner deformations and better performance using a single toolset within Maya LT.
- Advanced Booleans: Maya LT now employs a robust and efficient new library for faster and more reliable Boolean operations on polygon geometry.
- FBX Export Improvements: Advanced support for exporting accurate geometry normals (binormals) facilitates consistent surface shading when assets are rendered in-engine.


More information about Maya LT and a free trial of the software are available via http://www.autodesk.com/mayalt and http://area.autodesk.com/mayalt .

 

Is 65K polygons a better limit, or still too low for your games?  It’s certainly an improvement on the old 25K limit.  One of the big flaws of a polygon limit is if you are using Maya as a level design tool, which is nicely solved if you are working in Unity, where the new Send to Unity workflow no longer has any limits.  If you are working in UDK or Project Anarchy, on the other hand, there is still a problem.  On a somewhat off-topic note, I am not sure what I think of the “extension” versioning system.  It makes sense and it’s nice to see a fast support cycle, but there is something about it I find off-putting.

News