Using LibGDX in IntelliJ with Gradle

1. December 2013

 

I make no effort to disguise my dislike for Eclipse, yet when working with LibGDX, Eclipse is the path of least resistance.  Fortunately there are other IDEs out there that can work with LibGDX; IntelliJ is my favorite of the options available.

 

First off, you should be aware that there are currently some key limitations with the Gradle install.  Right now there is no support for GWT ( HTML5 ) or iOS projects.  So if those targets are important you should stick with Eclipse for now.

 

OK, let’s jump right in.  There are a couple of things you need to have installed already.  The first is a Java JDK; the second is the Android SDK ( not the ADT version! ).  I am not going into detail about how to install these; I simply assume you already have.  If you haven’t already, be sure to download and install IntelliJ IDEA; the free version has everything you need.

 

Installing the template

 

Now you need the LibGDX Gradle Template.  If you have git installed and configured ( you should! ) you can clone the template.  Open a command prompt or terminal, change to the directory where you want your project to reside and run the command:

git clone https://github.com/libgdx/libgdx-gradle-template.git

It should run like so:

image

 

Alternatively, you can download the template from GitHub as a zip archive, then extract the contents to where you want the project to reside.

 

Now we will install Gradle; this process is entirely automated.  In an elevated permission command prompt, change into the directory you just cloned/copied.  Now run the command:

gradlew clean

This will take a bit of time as it downloads and configures Gradle and all of your project’s dependencies.  Ideally you will see:

image

Now let’s make sure things worked right.  Run the command

gradlew desktop:run

 

Assuming everything went right, you should see:

image

 

Now generate an IntelliJ project.  Run the command

gradlew idea

This will create an IntelliJ project file with a .ipr extension.

image

 

Configuring IntelliJ IDEA

 

Load IntelliJ IDEA.

At the Quick Start Window, select Open Project

image

 

Navigate to the IPR file then select OK.

image

 

Configure the desktop project

 

Your project should ( after you click the Project tab ) look like this:

image

 

Let’s configure the desktop project first.  Select desktop in the project view and hit F4 or right click and select Open Module Settings:

image

 

Since you’ve never configured a JDK, you need to do that first.  In the future you shouldn’t have to do this step.  In the resulting dialog, select Project, then New… on the right side:

image

 

Select JDK in the drop down.  In the next dialog, navigate to and select the directory where you installed the Java JDK, then click OK.

image

 

Now that the Desktop project is configured, we need to create a Run or Debug configuration.  In the Run menu, select either Run… or Debug…

image

 

A menu will pop up, select Edit Configurations…

image

 

In the next dialog, click the + icon:

image

 

In the drop down, select Application:

image

 

Now we need to fill out the form.  You can optionally name your configuration; I went with Debug Desktop.  Next select “Use Classpath of module” and select Desktop.  In working directory, choose the assets folder in the Android project.  Click the … button to the right of Main Class and choose DesktopLauncher.  Finally click Apply then Debug.

image

 

If everything went right you should see:

image

 

Configure the Android Project

 

Now let’s take a look at configuring the Android project; it’s a very similar process.

Right Click the Android project and select Open Module Settings.

Select Project, New->Android SDK

image

 

Browse to where you installed the Android SDK then click OK:

image

Pick whatever Java version and Android target you want.  Keep in mind, you need to install the SDKs as part of the Android SDK installation process:

image

 

Click OK, then Apply.

 

Now you can create a Debug or Run configuration for Android.  Select Run->Debug… or Run->Run…

Select Edit Configuration…

Click the + Icon, then select Android Application:

image

 

Now configure your run/debug configuration.  Name it, select Android from the Module pull down, and pick what you want to target ( run on the emulator vs run on a device ).  Finally click Apply then Debug.

image

 

Once you’ve created a Run configuration, you can run your various projects using this pull down:

image

 

 

Renaming your project

 

It is possible there is a much easier way to do this as part of the Gradle process ( I am certainly no Gradle expert! ), but once you have your project running, you probably want to rename it to something of your own.  This means altering directories to match your new naming convention.  IntelliJ makes this fairly simple.

 

In the Project panel, click the gear icon, then disable Compact Empty Middle Packages.

image

 

 

In this case, instead of com.badlogic.gradletest, I want it to be com.gamefromscratch.gradletest.

In the core project, select badlogic, right click and select Refactor->Rename…

image

 

Select Rename All

image

 

Select your new name, then click Refactor:

image

 

Now repeat the process in the Android folder, select and refactor badlogicgames.

image

 

This time select Rename Directory

image

Once again select the new value you want then click Refactor.

 

Finally, locate AndroidManifest.xml and update the package name there as well.

image
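After the refactoring, the package attribute at the top of the manifest should match your new name.  With my example rename, the opening of the manifest would look something like this ( a sketch; your version values may differ ):

<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.gamefromscratch.gradletest"
    android:versionCode="1"
    android:versionName="1.0" >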

 

 

A word of note: refactoring won’t update your project’s Run Configuration.  If you rename the project after creating a Run Configuration, you will see:

image

 

This is easily fixed; simply select Run->Edit Configurations:

image

 

Select your Desktop configuration and update the Main Class to your newly renamed value:

image

 

… now you are up and running in IntelliJ.  It’s a bit more work, but certainly worth it in the end, especially if you don’t need GWT or iOS support.  Hopefully those get added soon!





LibGDX Tutorial 9: Scene2D Part 1

27. November 2013

 

In this section we are going to take a look at the Scene2D library.  The first thing you need to be aware of is that Scene2D is entirely optional!  If you don’t want to use it, don’t.  All the other parts of LibGDX, except the bits built over Scene2D, will continue to work just fine.  Additionally, if you want to use Scene2D for only parts of your game ( such as a HUD overlaid on top of your game ) you can.

 

So, what is Scene2D?  In a nutshell, it’s a 2D scene graph.  So you might be asking “what’s a scene graph?”.  Good question!  Essentially a scene graph is a data structure for storing the stuff in your world.  So if your game world is composed of dozens or hundreds of sprites, those sprites are stored in the scene graph.  In addition to holding the contents of your world, Scene2D provides a number of functions that it performs on that data: things such as hit detection, creating hierarchies between game objects, routing input, creating actions for manipulating a node over time, etc.
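To make the hierarchy part concrete, here is a minimal sketch using Scene2D’s Group class to parent actors together so they move as one ( assuming an existing Stage named stage; TankBodyActor and TurretActor are hypothetical Actor subclasses, made up purely for illustration ):

import com.badlogic.gdx.scenes.scene2d.Group;

// Hypothetical actors parented under a single group.
Group tank = new Group();
tank.addActor(new TankBodyActor());
tank.addActor(new TurretActor());
stage.addActor(tank);

// Moving the group moves the body and the turret together.
tank.setPosition(100, 50);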

 

You can think of Scene2D as a higher level framework for creating a game built over top of the LibGDX library.  Oh it also is used to provide a very good UI widget library… something we will discuss later.

 

The object design of Scene2D is built around the metaphor of a play ( or at least I assume it is ).  At the top of the hierarchy you have the Stage.  This is where your play (game) will take place.  The Stage in turn contains a Viewport… think of this like, um… a camera recording the play ( or the viewpoint of someone in the audience ).  The next major abstraction is the Actor, which is what fills the stage with… stuff.  This name is a bit misleading, as an Actor isn’t necessarily a visible actor on stage.  Actors could also include the guy running the lighting, a piece of scenery on stage, etc.  Basically actors are the stuff that makes up your game.  So you split your game up into logical scenes ( be it screens, stages, levels, whatever makes sense ) composed of Actors.  Again, if the metaphor doesn’t fit your game, you don’t need to use Scene2D.

 

So, that’s the idea behind the design, let’s look at a more practical example.  We are simply going to create a scene with a single stage and add a single actor to it.

It’s important to be using the most current version of LibGDX, as recent changes to Batch/SpriteBatch will result in the following code not working!

package com.gamefromscratch;

import com.badlogic.gdx.ApplicationListener;
import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.graphics.GL10;
import com.badlogic.gdx.graphics.Texture;
import com.badlogic.gdx.graphics.g2d.Batch;
import com.badlogic.gdx.scenes.scene2d.Actor;
import com.badlogic.gdx.scenes.scene2d.Stage;

public class SceneDemo implements ApplicationListener {
    
    public class MyActor extends Actor {
        Texture texture = new Texture(Gdx.files.internal("data/jet.png"));
        @Override
        public void draw(Batch batch, float alpha){
            batch.draw(texture,0,0);
        }
    }
    
    private Stage stage;
    
    @Override
    public void create() {        
        stage = new Stage(Gdx.graphics.getWidth(),Gdx.graphics.getHeight(),true);
        
        MyActor myActor = new MyActor();
        stage.addActor(myActor);
    }

    @Override
    public void dispose() {
        stage.dispose();
    }

    @Override
    public void render() {    
        Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT);
        stage.draw();
    }

    @Override
    public void resize(int width, int height) {
    }

    @Override
    public void pause() {
    }

    @Override
    public void resume() {
    }
}

 

 

Also note, I added the jet image used in earlier examples to the assets folder as a file named jet.png.  See the earlier tutorials if you are unsure of how to do this.  When you run the application you should see:

image

 

As you can see, it’s a fairly simple process working with Scene2D.  We create an embedded Actor derived class named MyActor.  MyActor simply loads its own texture from file.  The key part is the draw() method.  This will be called every frame by the stage containing the actor.  It is here you draw the actor to the stage using the provided Batch.  Batch is the interface that the SpriteBatch class we saw earlier implements, and it is responsible for batching up drawing calls to OpenGL.  In this example we simply draw our Texture to the batch at the location 0,0.  Your actor could just as easily be programmatically generated, from a spritesheet, etc.  One thing I should point out: this example is written for brevity.  In a real world scenario you would want to manage things differently, as every MyActor would leak its Texture when it is destroyed!
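One way to avoid that leak ( just a sketch, not part of this tutorial’s code ) is to give the actor its own dispose() method and call it yourself when the actor is done, alongside disposing the stage:

public class MyActor extends Actor {
    private final Texture texture = new Texture(Gdx.files.internal("data/jet.png"));

    @Override
    public void draw(Batch batch, float alpha) {
        batch.draw(texture, 0, 0);
    }

    // Not part of the Actor interface; call this yourself when done with the actor.
    public void dispose() {
        texture.dispose();
    }
}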

 

In our application’s create() method we create our stage, passing in the app resolution.  The true value indicates that we want to preserve our device’s aspect ratio.  Once our stage is created, we create an instance of MyActor and add it to the stage with a call to stage.addActor().  Next up in the render() function, we clear the screen then draw the stage by calling the draw() method.  This in turn calls the draw() method of every actor the stage contains.  Finally you may notice that we dispose of stage in our app’s dispose() call to prevent a leak.

 

So, that is the basic anatomy of a Scene2D based application.  One thing I didn’t touch upon is actually having actors do something, or how you would control one.  The basic process is remarkably simple, with a couple of potential gotchas.  Let’s look at an updated version of this code; the changes are explained below:

 

package com.gamefromscratch;

import com.badlogic.gdx.ApplicationListener;
import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.graphics.GL10;
import com.badlogic.gdx.graphics.Texture;
import com.badlogic.gdx.graphics.g2d.Batch;
import com.badlogic.gdx.scenes.scene2d.Actor;
import com.badlogic.gdx.scenes.scene2d.InputEvent;
import com.badlogic.gdx.scenes.scene2d.InputListener;
import com.badlogic.gdx.scenes.scene2d.Stage;
import com.badlogic.gdx.scenes.scene2d.Touchable;

public class SceneDemo2 implements ApplicationListener {
    
    public class MyActor extends Actor {
        Texture texture = new Texture(Gdx.files.internal("data/jet.png"));
        float actorX = 0, actorY = 0;
        public boolean started = false;

        public MyActor(){
            setBounds(actorX,actorY,texture.getWidth(),texture.getHeight());
            addListener(new InputListener(){
                public boolean touchDown (InputEvent event, float x, float y, int pointer, int button) {
                    ((MyActor)event.getTarget()).started = true;
                    return true;
                }
            });
        }
        
        
        @Override
        public void draw(Batch batch, float alpha){
            batch.draw(texture,actorX,actorY);
        }
        
        @Override
        public void act(float delta){
            if(started){
                actorX+=5;
            }
        }
    }
    
    private Stage stage;
    
    @Override
    public void create() {        
        stage = new Stage();
        Gdx.input.setInputProcessor(stage);
        
        MyActor myActor = new MyActor();
        myActor.setTouchable(Touchable.enabled);
        stage.addActor(myActor);
    }

    @Override
    public void dispose() {
        stage.dispose();
    }

    @Override
    public void render() {
        Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT);
        stage.act(Gdx.graphics.getDeltaTime());
        stage.draw();
    }

    @Override
    public void resize(int width, int height) {
    }

    @Override
    public void pause() {
    }

    @Override
    public void resume() {
    }
}

 

When you run it you will see:

image

Click the jet sprite and its action will start.  Let’s take a closer look at the code now.

 

Let’s start with the changes we made to the MyActor class.  The most obvious change is the addition of a constructor.  I did this so I could add an event listener to our actor, which works a lot like the event listeners we worked with earlier when dealing with input.  This one however receives an InputEvent parameter, which provides the method getTarget(), returning the Actor that was touched.  We simply cast it to a MyActor object and set the started boolean to true.  One other critical thing you may notice is the setBounds() call.  This call is very important!  If you inherit from Actor, you need to set the bounds or it will not be click/touch-able!  This particular gotcha cost me about an hour of my life.  Simply set the bounds to match the texture your Actor contains.  Another thing to be aware of: a lot of the examples and documentation on Actor event handling are currently out of date, as there have been some breaking changes in the past!

 

Other than the constructor, the other major change we made to MyActor was the addition of the act() method.  Just like draw(), act() is called for every actor on the stage.  This is where you update your actor over time.  In many other frameworks, act would instead be called update().  In this case we simply add 5 pixels to the X location of our MyActor each frame.  Of course, we only do this once the started flag has been set.
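As an aside, the actions system mentioned at the start of this tutorial can drive this kind of motion for you.  Here is a rough sketch ( the 300 pixel distance and 2 second duration are arbitrary choices ); note it assumes the actor draws itself at getX()/getY() rather than its own position fields, and that an Actor which overrides act() must call super.act(delta) or scheduled actions will never run:

import com.badlogic.gdx.scenes.scene2d.actions.Actions;

// Inside touchDown(), instead of setting started = true:
Actor target = event.getTarget();
target.addAction(Actions.moveTo(target.getX() + 300, target.getY(), 2.0f));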

 

In our create() method, we made a couple of small changes.  First we need to register an InputProcessor.  Stage implements one, so you simply pass the stage object to setInputProcessor().  As you saw earlier, the stage handles calling the InputListeners of all the child actors.  We also set the actor as Touchable, although I believe this is the default behavior.  If you want to make it so an actor cannot be touched/clicked, pass in Touchable.disabled instead.  The only other change is in the render() method: we now call stage.act(), passing in the elapsed time since the previous frame.  This is what causes the various actors to have their act() function called.

 

Scene2D is a pretty big subject, so I will be dealing with it over several parts.





LibGDX Tutorial 8: Audio

19. November 2013

 

This section is going to be rather small because, well frankly, LibGDX makes audio incredibly easy.  Unlike previous tutorials, this one is going to contain a number of snippets.  LibGDX supports three audio formats: OGG, MP3 and WAV.  MP3 is a format mired in legal issues, while WAV files are rather large, leaving OGG as often the best choice.  That said, when it comes to being broadly supported ( especially in browsers ), OGG can have issues.  This of course is why multiple formats exist and continue to be used!

Playing Sound Effects

 

Loading a sound file is trivial.  Like you did earlier with fonts or graphics, you need to add the files to the assets folder in the Android project folder.  Like earlier, I followed convention and put everything in the data subdirectory, like so:

image

 

As you can see, I added a file of each format, mp3.mp3, ogg.ogg and wav.wav.

 

Loading any of these files is incredibly simple:

Sound wavSound = Gdx.audio.newSound(Gdx.files.internal("data/wav.wav"));
Sound oggSound = Gdx.audio.newSound(Gdx.files.internal("data/ogg.ogg"));
Sound mp3Sound = Gdx.audio.newSound(Gdx.files.internal("data/mp3.mp3"));

This returns a Sound object using the specified file name.  Once you have a Sound, playing is trivial:

wavSound.play();

You also have the option of setting the play volume when calling play, such as:

oggSound.play(0.5f);

This plays the oggSound object at 50% volume for example.

 

In addition to play() you can also call loop() to, well, loop a Sound continuously.  When you play a sound it returns an id that you can use to interact with the sound.  Consider:

long id = mp3Sound.loop();
Timer.schedule(new Task(){
   @Override
   public void run(){
      mp3Sound.stop(id);
      }
   }, 5.0f);

 

Here you start an mp3 file looping, which returns an id value.  Then we schedule a task to run 5 seconds later to stop the sound from playing.  Notice how in the call to stop() an id is passed?  This allows you to manage a particular instance of a sound playing, which matters because you can play the same Sound object a number of times simultaneously.  One important thing to be aware of: Sound objects are a managed resource, so when you are done with them, call dispose().

wavSound.dispose();
oggSound.dispose();
mp3Sound.dispose();

 

Once you have a sound, there are a number of manipulations you can do.  You can alter the pitch:

long id = wavSound.play();
wavSound.setPitch(id,0.5f);

 

The first parameter is the sound id to alter; the second value is the new pitch ( playback speed ).  The value must be between 0.5 and 2.0.  Less than 1 is slower, greater than 1 is faster.

You can alter the volume:

long id = wavSound.play();
wavSound.setVolume(id,1.0f);

 

Once again, you pass the id of the sound, as well as the volume to play at.  A value of 0 is silent, while 1 is full volume.  As well you can set the Pan ( stereo position ), like so:

long id = wavSound.play();
wavSound.setPan(id, 1f, 1f);

In this case the parameters are the sound id, the pan value ( –1 is full left, 0 is center, 1 is full right ) as well as the volume.  You can also specify the pitch, pan and volume when calling play() or loop().  One important note: none of these methods are guaranteed to work on the WebGL/HTML5 backend.  Additionally, file format support varies between browsers ( and is very annoying! ).
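For example, here is a one-line sketch setting all three at playback time ( the values are arbitrary; the parameter order is volume, pitch, pan ):

long id = wavSound.play(0.8f, 1.5f, -1f); // 80% volume, 1.5x pitch, panned full left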

 

Streaming music

 

In addition to playing sound effects, LibGDX also offers support for playing music ( or longer duration sound effects! ).  The big difference is LibGDX will stream the effect in this case, greatly lowering the demands on memory. This is done using the Music class.  Fortunately it’s remarkably simple:

Music mp3Music = Gdx.audio.newMusic(Gdx.files.internal("data/RideOfTheValkyries.mp3"));
mp3Music.play();

 

And that’s all you need to stream an audio file.  The controls are a bit different for a Music file.  First off, there is no id, which also means you cannot play multiple instances of a single Music object at once.  Second, there are a series of VCR style control options.  Here is a rather impractical example of playing a Music file:

 

Music mp3Music = Gdx.audio.newMusic(Gdx.files.internal("data/RideOfTheValkyries.mp3"));
mp3Music.play();
mp3Music.setVolume(1.0f);
mp3Music.pause();
mp3Music.stop();
mp3Music.play();
Gdx.app.log("SONG",Float.toString(mp3Music.getPosition()));

 

After our Music file is loaded, we start it, then set the volume to 100%.  Next we pause, then stop, then play our music file again.  As you can see from the log() call, you can get the current playback position of the Music object by calling getPosition().  This returns the current elapsed time into the song in seconds.  You may be wondering what exactly the difference is between pause() and stop().  Calling play() after pause() will continue playing the song at the current position.  Calling play() after calling stop() will restart the song.

Once again, Music is a managed resource, so you need to dispose() it when done or you will leak memory.
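For a typical background track you will usually combine a few of these calls; a minimal sketch:

Music music = Gdx.audio.newMusic(Gdx.files.internal("data/RideOfTheValkyries.mp3"));
music.setLooping(true);   // start over automatically when the track ends
music.play();

// ... later, when leaving this screen or shutting down:
if (music.isPlaying())
    music.stop();
music.dispose();          // Music is a managed resource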

 

Recording and playing PCM audio

 

LibGDX also has the ability to work at a lower level using raw PCM data.  Basically this is a short (16bit) or float (32bit) array of values composing the waveform to play back.  This allows you to create audio effects programmatically.  You can also record audio into PCM form.  Consider the following example:

AudioDevice playbackDevice = Gdx.audio.newAudioDevice(44100, true);
AudioRecorder recordingDevice = Gdx.audio.newAudioRecorder(44100, true);
short[] samples = new short[44100 * 10]; // 10 seconds mono audio
recordingDevice.read(samples, 0, samples.length);
playbackDevice.writeSamples(samples, 0, samples.length);
recordingDevice.dispose();
playbackDevice.dispose();

 

This example creates an AudioDevice and an AudioRecorder.  In both functions you pass the desired sampling rate ( 44.1khz is CD audio quality ) as well as a bool representing whether you want mono ( single channel ) or stereo ( left/right ) audio.  Next we create an array to record our audio into.  In this example, we want 10 seconds worth of audio at the 44.1khz sampling rate.  We then record the audio by calling the read() method of the AudioRecorder object.  We pass in the array to write to, the offset within the array to start at and finally the total sample length.  We then play back the audio we just recorded by calling writeSamples(), using the exact same parameters.  Both AudioDevice and AudioRecorder are managed resources and thus need to be disposed.

 

There are a few very important things to be aware of.  First, PCM audio is NOT available on HTML5.  Second, if you are recording in stereo, you need to double the size of your array.  The data in the array for a stereo waveform is interleaved: the first sample in the array is the first sample of the left channel, the next is the first sample of the right channel, the next is the second sample of the left channel, and so on.
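Since the samples are just numbers, you can also generate them programmatically rather than recording them.  Here is a rough sketch that synthesizes and plays one second of a 440Hz sine tone ( the frequency and half-scale amplitude are arbitrary choices ):

AudioDevice device = Gdx.audio.newAudioDevice(44100, true); // 44.1khz, mono
short[] samples = new short[44100]; // one second of mono audio
for (int i = 0; i < samples.length; i++) {
    double time = i / 44100.0;
    samples[i] = (short)(Math.sin(2 * Math.PI * 440 * time) * Short.MAX_VALUE * 0.5);
}
device.writeSamples(samples, 0, samples.length);
device.dispose();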





LibGDX Tutorial 7: Camera basics

6. November 2013

Now we are going to look quickly at using a camera, something we haven’t used in any of the prior tutorials.  Using a camera has a couple of advantages.  It gives you an easier way of dealing with device resolution, as LibGDX will scale the results up to match your device’s resolution.  It also makes it easier to move the view around when your scene is larger than a single screen.  That is exactly what we are going to do in the code example below.

 

I am using a large ( 2048x1024 ) image that I obtained here.

 

Alright, now the code:

package com.gamefromscratch;

import com.badlogic.gdx.ApplicationListener;
import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.graphics.GL10;
import com.badlogic.gdx.graphics.OrthographicCamera;
import com.badlogic.gdx.graphics.Texture;
import com.badlogic.gdx.graphics.Texture.TextureFilter;
import com.badlogic.gdx.graphics.g2d.Sprite;
import com.badlogic.gdx.graphics.g2d.SpriteBatch;
import com.badlogic.gdx.input.GestureDetector;
import com.badlogic.gdx.input.GestureDetector.GestureListener;
import com.badlogic.gdx.math.Vector2;

public class CameraDemo implements ApplicationListener, GestureListener {
    private OrthographicCamera camera;
    private SpriteBatch batch;
    private Texture texture;
    private Sprite sprite;

    @Override
    public void create() {
        camera = new OrthographicCamera(1280, 720);

        batch = new SpriteBatch();

        texture = new Texture(Gdx.files.internal("data/Toronto2048wide.jpg"));
        texture.setFilter(TextureFilter.Linear, TextureFilter.Linear);

        sprite = new Sprite(texture);
        sprite.setOrigin(0, 0);
        sprite.setPosition(-sprite.getWidth()/2, -sprite.getHeight()/2);

        Gdx.input.setInputProcessor(new GestureDetector(this));
    }

    @Override
    public void dispose() {
        batch.dispose();
        texture.dispose();
    }

    @Override
    public void render() {
        Gdx.gl.glClearColor(1, 1, 1, 1);
        Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT);

        batch.setProjectionMatrix(camera.combined);
        batch.begin();
        sprite.draw(batch);
        batch.end();
    }

    @Override
    public void resize(int width, int height) {
    }

    @Override
    public void pause() {
    }

    @Override
    public void resume() {
    }

    @Override
    public boolean touchDown(float x, float y, int pointer, int button) {
        return false;
    }

    @Override
    public boolean tap(float x, float y, int count, int button) {
        return false;
    }

    @Override
    public boolean longPress(float x, float y) {
        return false;
    }

    @Override
    public boolean fling(float velocityX, float velocityY, int button) {
        return false;
    }

    @Override
    public boolean pan(float x, float y, float deltaX, float deltaY) {
        camera.translate(deltaX, 0);
        camera.update();
        return false;
    }

    @Override
    public boolean zoom(float initialDistance, float distance) {
        return false;
    }

    @Override
    public boolean pinch(Vector2 initialPointer1, Vector2 initialPointer2,
                         Vector2 pointer1, Vector2 pointer2) {
        return false;
    }
}

 

Additionally in Main.java I changed the resolution to 720p like so:

package com.gamefromscratch;

import com.badlogic.gdx.backends.lwjgl.LwjglApplication;
import com.badlogic.gdx.backends.lwjgl.LwjglApplicationConfiguration;

public class Main {
    public static void main(String[] args) {
        LwjglApplicationConfiguration cfg = new LwjglApplicationConfiguration();
        cfg.title = "camera";
        cfg.useGL20 = false;
        cfg.width = 1280;
        cfg.height = 720;

        new LwjglApplication(new CameraDemo(), cfg);
    }
}

When you run it you will see:

 

image

 

 

 

Other than being an image of my city’s skyline, it’s pannable: you can swipe left or right to pan the image around.

 

The code is mostly familiar at this point, but the important new line is:

camera = new OrthographicCamera(1280, 720);

This is where we create the camera.  There are two kinds of cameras in LibGDX: orthographic and perspective.  Basically an orthographic camera renders what is in the scene exactly the size it is.  A perspective camera on the other hand emulates the way the human eye works, by rendering objects slightly smaller as they get further away.  Here is an example from my Blender tutorial series.

 

Perspective:

image

Orthographic:

image

 

Notice how the far wing is smaller in the perspective render?  That’s what perspective rendering does for you.  In 2D rendering however, 99 times out of 100 you want to use Orthographic. 

 

The values passed to the constructor are the resolution of the camera: the width and height.  In this particular case I chose to use pixels, as I wanted the rendering at 1280x720 pixels.  You however do not have to… if you are using physics and want real world units for example, you could go with meters, or whatever you want.  The key thing is that your aspect ratio is correct; see the sketch below.  The rest of the code in create() is about loading our image and positioning it about the origin in the world.  Finally we wire up our gesture handler so we can pan/swipe left and right on the image.
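For example, here is a minimal sketch of creating the camera in world units instead of pixels, deriving the viewport height from the window so the aspect ratio stays correct ( the 20 unit width is an arbitrary choice ):

float aspectRatio = (float)Gdx.graphics.getHeight() / Gdx.graphics.getWidth();
// A viewport 20 world units ( e.g. meters ) wide; the height follows from the aspect ratio.
camera = new OrthographicCamera(20, 20 * aspectRatio);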

 

The next important call is in render():

batch.setProjectionMatrix(camera.combined);

This ties our LibGDX camera object to the OpenGL renderer.  The OpenGL rendering process depends on a number of matrices to properly translate from the scene or world to screen coordinates during rendering.  camera.combined returns the camera’s view and projection matrices multiplied together.  If you want more information about the math behind the scenes you can read here.  Of course, the entire point of the Camera classes is so you don’t have to worry about this stuff, so if you find it confusing, don’t sweat it; LibGDX takes care of the math for you.

Finally in our pan handler ( huh? ) we have the following code:

camera.translate(deltaX, 0);
camera.update();

 

You can use translate() to move the camera around.  Here we move the camera along the X axis by the amount the user swiped.  This causes the view of the image to move as the user swipes the screen/pans the mouse.  Once you are done modifying the camera, you need to update it; without calling update() the camera would never move.
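A common refinement here ( not part of the code above, just a sketch ) is to clamp the camera position so the user cannot pan past the edges of the image.  With our 2048 pixel wide image centered on the origin and a 1280 pixel wide viewport, the pan handler might become:

@Override
public boolean pan(float x, float y, float deltaX, float deltaY) {
    camera.translate(deltaX, 0);
    // Keep the viewport inside the image, which is centered on the origin.
    float limit = sprite.getWidth()/2 - camera.viewportWidth/2; // 1024 - 640 = 384
    camera.position.x = com.badlogic.gdx.math.MathUtils.clamp(camera.position.x, -limit, limit);
    camera.update();
    return false;
}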

There are a number of neat functions in the camera that we don’t use here.  There are functions to look at a point in space, to rotate, or even rotate around ( orbit ) a vector.  There are also functions for projecting between screen and world space, as well as code for ray casting into the scene.  In a straight 2D game though you generally won’t use a lot of this functionality.  We may take a closer look at the camera class later on when we jump to 3D.





LibGDX Tutorial 6: Motion controls

30. October 2013

In the previous tutorial we looked at handling touch and gesture events.  These days, most mobile devices have very accurate motion detection capabilities, which LibGDX fully supports.  In this example we will look at how to handle motion, how to detect whether a device supports certain functionality and how to detect which way the device is oriented.

 

This project revolves around a single code example, but there are some configuration steps you need to be aware of.

 

First off, in order to tell LibGDX that you want to use the compass and accelerometer, you need to pass that as part of the configuration in your Android MainActivity.  In the android project locate MainActivity.java and edit it accordingly:

package com.gamefromscratch;

import android.os.Bundle;

import com.badlogic.gdx.backends.android.AndroidApplication;
import com.badlogic.gdx.backends.android.AndroidApplicationConfiguration;

public class MainActivity extends AndroidApplication {
    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        AndroidApplicationConfiguration cfg = new AndroidApplicationConfiguration();
        cfg.useGL20 = true;
        cfg.useAccelerometer = true;
        cfg.useCompass = true;

        initialize(new MotionDemo(), cfg);
    }
}

 

The meaningful lines are 

cfg.useAccelerometer = true;

and

cfg.useCompass = true;

 

These lines tell LibGDX to enable both.

Next we need to make a couple of changes to your Android manifest.  This is a configuration file of sorts that tells the Android operating system how your application behaves and what permissions it requires to run.  You could literally write an entire book about dealing with Android manifests, so if you want more information read here.  The manifest is located at the root of your Android project and is called AndroidManifest.xml.  There are a couple of ways you can edit it.  Simply right click AndroidManifest.xml and select Open With->.

image

 

I personally prefer to simply edit using the Text Editor, but if you want a more guided experience, you can select Android Manifest Editor, which brings up this window:

image

This is basically a GUI layer over top of the Android manifest.  Using the tabs across the bottom you can switch between the different categories and a corresponding form will appear.  If you click AndroidManifest.xml it will bring up a text view of the manifest.  Use whichever interface you prefer, it makes no difference in the end.

There are two changes we want to make to the manifest.  First we want the device to support rotation, so if the user rotates their device, the application rotates accordingly.  This is done by setting the property android:screenOrientation to fullSensor.  Next we want to grant the permission android.permission.VIBRATE.  If you do not add this permission, calling vibrate() will cause your application to crash!

 

Here is how my manifest looks with changes made:

<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.gamefromscratch"
    android:versionCode="1"
    android:versionName="1.0" >

    <uses-sdk android:minSdkVersion="5" android:targetSdkVersion="17" />
    <uses-permission android:name="android.permission.VIBRATE"/>

    <application
        android:icon="@drawable/ic_launcher"
        android:label="@string/app_name" >
        <activity
            android:name=".MainActivity"
            android:label="@string/app_name"
            android:screenOrientation="fullSensor"
            android:configChanges="keyboard|keyboardHidden|orientation|screenSize">
            <intent-filter>
                <action android:name="android.intent.action.MAIN" />
                <category android:name="android.intent.category.LAUNCHER" />
            </intent-filter>
        </activity>
    </application>

</manifest>

The changes are the uses-permission element and the android:screenOrientation attribute.  You want to be careful when you request additional permissions, as they will be shown when the user installs your application.  Too many permissions and people start getting scared of your application.  Of course, if you need to do something that requires a permission there isn’t much you can do!  As to the screenOrientation value, this tells Android which orientations your application supports.  There are a number of options, Landscape and Portrait being two of the most common.  fullSensor basically means all directions are supported, so you can rotate the device 360 degrees and it will be rotated accordingly.  On the other hand, if you select “user”, you cannot rotate the device 180 degrees, meaning you cannot use it upside down.  You can read more about the available properties in the link I provided earlier.

There is one last important thing to be aware of before moving on.  Your Android project will actually have two AndroidManifest.xml files: one in the root directory, another in the bin subfolder.  Be certain to edit the one in the root directory, as the other one gets copied over by the build!

 

Ok, now that we are fully configured, let’s jump into the code sample:

package com.gamefromscratch;

import com.badlogic.gdx.ApplicationListener;
import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.Input.Orientation;
import com.badlogic.gdx.Input.Peripheral;
import com.badlogic.gdx.graphics.Color;
import com.badlogic.gdx.graphics.GL10;
import com.badlogic.gdx.graphics.g2d.BitmapFont;
import com.badlogic.gdx.graphics.g2d.SpriteBatch;

public class MotionDemo implements ApplicationListener {
    private SpriteBatch batch;
    private BitmapFont font;
    private String message = "Do something already!";
    private float highestY = 0.0f;

    @Override
    public void create() {
        batch = new SpriteBatch();
        font = new BitmapFont(Gdx.files.internal("data/arial-15.fnt"), false);
        font.setColor(Color.RED);
    }

    @Override
    public void dispose() {
        batch.dispose();
        font.dispose();
    }

    @Override
    public void render() {
        int w = Gdx.graphics.getWidth();
        int h = Gdx.graphics.getHeight();
        Gdx.gl.glClearColor(1, 1, 1, 1);
        Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT);

        batch.begin();

        int deviceAngle = Gdx.input.getRotation();
        Orientation orientation = Gdx.input.getNativeOrientation();
        float accelY = Gdx.input.getAccelerometerY();
        if(accelY > highestY)
            highestY = accelY;

        message = "Device rotated to:" + Integer.toString(deviceAngle) + " degrees\n";
        message += "Device orientation is ";
        switch(orientation){
            case Landscape:
                message += " landscape.\n";
                break;
            case Portrait:
                message += " portrait. \n";
                break;
            default:
                message += " complete crap!\n";
                break;
        }

        message += "Device Resolution: " + Integer.toString(w) + "," + Integer.toString(h) + "\n";
        message += "Y axis accel: " + Float.toString(accelY) + " \n";
        message += "Highest Y value: " + Float.toString(highestY) + " \n";

        if(Gdx.input.isPeripheralAvailable(Peripheral.Vibrator)){
            if(accelY > 7){
                Gdx.input.vibrate(100);
            }
        }

        if(Gdx.input.isPeripheralAvailable(Peripheral.Compass)){
            message += "Azimuth:" + Float.toString(Gdx.input.getAzimuth()) + "\n";
            message += "Pitch:" + Float.toString(Gdx.input.getPitch()) + "\n";
            message += "Roll:" + Float.toString(Gdx.input.getRoll()) + "\n";
        }
        else{
            message += "No compass available\n";
        }

        font.drawMultiLine(batch, message, 0, h);

        batch.end();
    }

    @Override
    public void resize(int width, int height) {
        batch.dispose();
        batch = new SpriteBatch();
        String resolution = Integer.toString(width) + "," + Integer.toString(height);
        Gdx.app.log("MJF", "Resolution changed " + resolution);
    }

    @Override
    public void pause() {
    }

    @Override
    public void resume() {
    }
}

 

When you run this program on a device, you should see:

image

 

As you move the device, the various values will update.  If you raise your phone to be within about 45 degrees of completely upright it will vibrate.  Of course, this all assumes that your device supports these sensors!

 

The code itself is actually remarkably straightforward; LibGDX makes working with motion sensors easy.  It’s in understanding the returned values that things get a bit more complicated.  The vast majority of the logic is in the render() method.  First we get the angle the device is rotated to.  This value is in degrees, with 0 being straight in front of you, parallel to your face.  One important thing to realize is this value will always have 0 as up, regardless of whether you are in portrait or landscape mode.  This is something LibGDX does to make things easier for you, but it is different behaviour from the Android norm.

Next we get the orientation of the device.  Orientation can be either landscape or portrait ( wide screen vs tall screen ).  Next we check the value of the accelerometer along the Y axis using getAccelerometerY().  You can also check the accelerometer for movement along the X and Z axes using getAccelerometerX() and getAccelerometerZ() respectively.  Once again, LibGDX standardizes the axis directions, regardless of the device’s orientation.  Speaking of which, Y is up.  This means if you hold your phone straight in front of you, parallel to your face, the Y axis is what you would traditionally think of as up and down.  The Z axis would be in front of you, so if you made a push or pulling motion, this would be along the Z axis.  The X axis would track movements to the left and right.

So then, what exactly are the values returned by the accelerometer?  Well, this part gets a bit confusing, as the accelerometer measures acceleration, and that includes the constant pull of gravity.  If you hold your phone straight out in front of you, with the screen parallel to your face, it will return a value of 9.8 on the Y axis.  That number should look familiar to you; it’s the acceleration due to gravity, in meters per second squared.  Therefore if your phone is stationary and upright, the reading is 9.8.  If you move the phone up, parallel to your body, the value will rise above 9.8; the amount depends on how fast you are moving the phone.  Moving down on the other hand will return a value below 9.8.  If you put the phone down flat on a desk, the Y reading will instead be 0.  Flipping the phone upside down will instead return -9.8 if held stationary.  Obviously the same occurs along the X and Z axes, but there it indicates motion left and right or in and out instead of up and down.
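If you want an actual tilt angle out of these readings, one rough approach ( a sketch; it assumes the device is only being tilted, not waved around ) is to look at how gravity is split between the Y and Z axes:

float accelY = Gdx.input.getAccelerometerY();
float accelZ = Gdx.input.getAccelerometerZ();
// 0 degrees when flat on a desk ( gravity all on Z ), 90 when fully upright ( all on Y ).
float tiltDegrees = (float)Math.toDegrees(Math.atan2(accelY, accelZ));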

Ok, back to our code.  We check to see if the current accelY value is the highest and, if it is, we record it to display.  Next we check what value the orientation returned and display the appropriate message.  We dump some information we’ve gathered out to be displayed on screen.  Next we make the very important call Gdx.input.isPeripheralAvailable().  This will return true if the user’s device supports the requested functionality.  First we check to see if the phone supports vibrating and, if it does, we check whether the Y reading is over 7.  Remember, the value 9.8 represents straight up and down, so a reading of 7 or higher means the phone is within about 45 degrees of vertical.  If it is, we vibrate by calling vibrate(); the value passed is the number of milliseconds to vibrate for.

Next we check to see if the device has a compass.  If it does, you can check the position of the device relative to magnetic north.  Here are the descriptions of each value from Google’s documentation:

Azimuth, rotation around the Z axis (0<=azimuth<360). 0 = North, 90 = East, 180 = South, 270 = West
Pitch, rotation around X axis (-180<=pitch<=180), with positive values when the z-axis moves toward the y-axis.
Roll, rotation around Y axis (-90<=roll<=90), with positive values when the z-axis moves toward the x-axis.

You can read more about it here.

Finally we draw the message we have been composing on screen.

There is only one other very important thing to notice in this example:

public void resize(int width, int height) {
    batch.dispose();
    batch = new SpriteBatch();
    String resolution = Integer.toString(width) + "," + Integer.toString(height);
    Gdx.app.log("MJF", "Resolution changed " + resolution);
}

 

In the resize() method we dispose of and recreate our SpriteBatch.  This is because when you change the orientation of the device from landscape to portrait, or vice versa, you invalidate the sprite batch; it is now the wrong size for your device.  Therefore in the resize() call, we recreate the SpriteBatch.
