
27. August 2014

 

Today we are going to look at implementing physics in LibGDX.  This technically isn't part of LibGDX itself, but is instead implemented as an extension.  The physics engine used in LibGDX is the popular Box2D physics system, a library that has been ported to basically every single platform and language ever invented, or so it seems.  We are going to cover how to implement Box2D physics in your 2D LibGDX game.  This is a complex subject, so it will require multiple parts.

 

If you’ve never used a Physics Engine before, we should start with a basic overview of what they do and how they work.  Essentially a physics engine takes scene information that you provide, then calculates “realistic” movement using physics calculations.  It goes something like this:

  • you describe all of the physics entities in your world to the physics engine, including bounding volumes, mass, velocity, etc.
  • you tell the engine to update, either per frame or on some other interval
  • the physics engine calculates how the world has changed: what's collided with what, how much gravity affects each item, each object's current speed and position, etc.
  • you take the results of the physics simulation and update your world accordingly, as sketched below.
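
Concretely, that loop maps onto LibGDX like the minimal sketch below ( a preview only; the full, commented version of this code follows later in the article ):

// Sketch of the per-frame update described in the list above; world, body
// and sprite are assumed to have been created up front (step 1 of the list).
@Override
public void render() {
    // Step 2: tell the engine to update
    world.step(Gdx.graphics.getDeltaTime(), 6, 2);

    // Steps 3 and 4: read back the simulation results and update the game object
    sprite.setPosition(body.getPosition().x, body.getPosition().y);
}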

 

Don’t worry, we will look at exactly how in a moment.

 

First we need to talk for a moment about creating your project.  Since Box2D is now implemented as an extension ( an optional LibGDX component ), you need to add it either manually or when you create your initial project.  Adding a library to an existing project is IDE dependent, so I am instead going to look at adding it during project creation… and totally not just because it’s really easy that way.

 

When you create your LibGDX project using the Project Generator, you simply specify which extensions you wish to include and Gradle does the rest.  In this case you simply check the box next to Box2d when generating your project like normal:

 

image

 

… and you are done.  You may be asking, hey, what about Box2dlights?  Nope, you don't currently need that one.  Box2dlights is a project for simulating lighting and shadows based off the Box2d physics engine.  You may notice another entry in that list named Bullet.  Bullet is another physics engine, although more commonly geared towards 3D games; possibly more on that at a later date.  Just be aware that if you are working in 3D, Box2d isn't of much use to you, but there are alternatives.

 

Ok, now that we have a properly configured project, let’s take a look at a very basic physics simulation.  We are simply going to take the default LibGDX graphic and apply gravity to it, about the simplest simulation you can make that actually does something.  Code time!

 

package com.gamefromscratch;

import com.badlogic.gdx.ApplicationAdapter;
import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.graphics.GL20;
import com.badlogic.gdx.graphics.Texture;
import com.badlogic.gdx.graphics.g2d.Sprite;
import com.badlogic.gdx.graphics.g2d.SpriteBatch;
import com.badlogic.gdx.math.Vector2;
import com.badlogic.gdx.physics.box2d.*;

public class Physics1 extends ApplicationAdapter {
    SpriteBatch batch;
    Sprite sprite;
    Texture img;
    World world;
    Body body;

    @Override
    public void create() {

        batch = new SpriteBatch();
        // We will use the default LibGDX logo for this example, but we need a
        // sprite since it's going to move
        img = new Texture("badlogic.jpg");
        sprite = new Sprite(img);

        // Center the sprite in the top/middle of the screen
        sprite.setPosition(Gdx.graphics.getWidth() / 2 - sprite.getWidth() / 2,
                Gdx.graphics.getHeight() / 2);

        // Create a physics world, the heart of the simulation.  The Vector
        // passed in is gravity
        world = new World(new Vector2(0, -98f), true);

        // Now create a BodyDefinition.  This defines the physics object's type
        // and position in the simulation
        BodyDef bodyDef = new BodyDef();
        bodyDef.type = BodyDef.BodyType.DynamicBody;
        // We are going to use 1 to 1 dimensions, meaning 1 unit in the physics
        // engine is 1 pixel
        // Set our body to the same position as our sprite
        bodyDef.position.set(sprite.getX(), sprite.getY());

        // Create a body in the world using our definition
        body = world.createBody(bodyDef);

        // Now define the dimensions of the physics shape
        PolygonShape shape = new PolygonShape();
        // We are a box, so this makes sense, no?
        // Basically set the physics polygon to a box with the same dimensions
        // as our sprite
        shape.setAsBox(sprite.getWidth()/2, sprite.getHeight()/2);

        // FixtureDef is a confusing name for a set of physical properties.
        // Basically this is where, in addition to defining the shape of the body,
        // you also define its properties like density, restitution and others
        // we will see shortly.
        // If you are wondering, density and area are used to calculate overall mass
        FixtureDef fixtureDef = new FixtureDef();
        fixtureDef.shape = shape;
        fixtureDef.density = 1f;

        Fixture fixture = body.createFixture(fixtureDef);

        // Shape is the only disposable of the lot, so get rid of it
        shape.dispose();
    }

    @Override
    public void render() {

        // Advance the world by the amount of time that has elapsed since the
        // last frame.
        // Generally in a real game, don't do this in the render loop, as you are
        // tying the physics update rate to the frame rate, and vice versa
        world.step(Gdx.graphics.getDeltaTime(), 6, 2);

        // Now update the sprite's position to match its newly updated physics body
        sprite.setPosition(body.getPosition().x, body.getPosition().y);

        // You know the rest...
        Gdx.gl.glClearColor(1, 1, 1, 1);
        Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
        batch.begin();
        batch.draw(sprite, sprite.getX(), sprite.getY());
        batch.end();
    }

    @Override
    public void dispose() {
        // Hey, I actually did some clean up in a code sample!
        img.dispose();
        world.dispose();
    }
}

 

The program running:

 

image

 

What's going on here is mostly explained in the comments, but I will give a simpler overview in English.  Basically, when using a physics engine you create a physical representation for each corresponding object in your game.  In this case we created a physics object ( Body ) that goes along with our sprite.  It's important to realize there is no actual relationship between these two objects.  There are a couple of components that go into a physics body: BodyDef, which defines what type of body it is ( more on this later; for now realize DynamicBody means a body that is updated and capable of movement ), and FixtureDef, which defines the shape and physical properties of the Body.  Of course, there is also the World, which is the actual physics simulation.

 

So, basically we created a Body, which is the physical representation of our Sprite in the physics simulation.  Then in render() we call the incredibly important step() method.  Step is what advances the physics simulation… basically think of it as the play button.  The physics engine then calculates all the various things that have changed since the last call to step.  The first value we pass in is the amount of time that has elapsed since the last update.  The next two values control the amount of accuracy in contact/joint calculations for velocity and position.  Basically, the higher the values, the more accurate your physics simulation will be, but the more CPU intensive as well.  Why 6 and 2?  'cause that's what the LibGDX site recommends and that works for me.  At the end of the day these are values you can tweak to your individual game.  The one other critical takeaway here is that we update the sprite's position to match the newly updated body's position.  Once again, in this example there is no actual link between a physics body and a sprite, so you have to do it yourself.

 

There you go, the world's simplest physics simulation.  There are a few quick topics to discuss before we move on.  First, units.

 

This is an important and sometimes tricky concept to get your head around with physics systems.  What does 1 mean?  One what?  The answer is, whatever the hell you want it to be, just be consistent about it!  In this particular case I used pixels.  Therefore 1 unit in the physics engine represents 1 pixel on the screen.  So when I said gravity is (0,-98), that means gravity accelerates objects at a rate of –98 pixels along the y axis per second squared.  Just as commonly, 1 in the physics engine could be meters, feet, kilometers, etc… then you use a custom ratio for translating to and from screen coordinates.  Most physics systems, Box2d included, really don't like you mixing your scales, however.  For example, if you have a universe simulation where 1 == 100 miles, then try to calculate the movement of an ant at 0.0000001 of those units per hour, and you will break the simulation, hard.  Find a scale that works well with the majority of your game and stick with it.  Extremely large and extremely small values within that simulation will cause problems.
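
If you instead choose a real-world unit like meters ( Box2D is internally tuned for meter-scale objects ), the usual trick is a single conversion ratio used everywhere you cross between physics and screen coordinates.  A minimal sketch, where PIXELS_PER_METER and the helper names are mine, not LibGDX API:

static final float PIXELS_PER_METER = 100f;

// Hypothetical helpers; pick one ratio and use it for every conversion
static float toMeters(float pixels) { return pixels / PIXELS_PER_METER; }
static float toPixels(float meters) { return meters * PIXELS_PER_METER; }

// Crossing the boundary in each direction:
// bodyDef.position.set(toMeters(sprite.getX()), toMeters(sprite.getY()));
// sprite.setPosition(toPixels(body.getPosition().x), toPixels(body.getPosition().y));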

 

Finally, a bit of a warning about how I implemented this demo, and hopefully something I will cover properly at a later date.  In this case I updated the physics system in the render loop.  This is a possibility, but generally wasteful.  It's fairly common to run your physics simulation at a fixed rate ( 30Hz and 60Hz being two of the most common, but lower is also a possibility if you are processing constrained ) and your render loop as fast as possible.
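
A common way to decouple the two is a fixed-timestep accumulator: pile up the elapsed frame time and consume it in fixed-size physics steps.  A minimal sketch of that pattern ( the constant and field names are illustrative ):

private static final float TIME_STEP = 1/60f; // 60Hz physics update
private float accumulator = 0f;

private void doPhysicsStep(float deltaTime) {
    // Cap the frame time so one long stall can't cause a spiral of catch-up steps
    float frameTime = Math.min(deltaTime, 0.25f);
    accumulator += frameTime;
    while (accumulator >= TIME_STEP) {
        world.step(TIME_STEP, 6, 2);
        accumulator -= TIME_STEP;
    }
}

Call doPhysicsStep(Gdx.graphics.getDeltaTime()) from render() and the simulation advances at a steady 60Hz no matter how fast you draw.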

 

In the next part we will give our object something to collide with, stay tuned.


9. August 2014

 

LibGDX, the cross platform, Java based, open source gaming library for iOS, Android, HTML5 and Desktop has just reached version 1.3.  The details of the new release:

image

 

  • API Addition: Added Input.isKeyJustPressed
  • API Addition: multiple recipients are now supported by MessageDispatcher, see https://github.com/libgdx/libgdx/wiki/Message-Handling#multiple-recipients
  • API Change: State#onMessage now takes the message receiver as argument.
  • API Addition: added StackStateMachine to the gdx-ai extension.
  • API change: ShapeRenderer: rect methods accept scale, more methods can work under both line and fill types, auto shape type changing.
  • API change: Built-in ShapeRenderer debugging for Stage, see https://github.com/libgdx/libgdx/pull/2011
  • Files#getLocalStoragePath now returns the actual path instead of the empty string synonym on desktop (LWJGL and JGLFW).
  • Fixed and improved xorshift128+ PRNG implementation.
  • Added support for Tiled’s animated tiles, and varying frame duration tile animations.
  • Fixed an issue with time granularity in MessageDispatcher.
  • Updated to Android API level 19 and build tools 19.1.0 which will require the latest Eclipse ADT 23.02, see http://stackoverflow.com/questions/24437564/update-eclipse-with-android-development-tools-23 for how things are broken this time…
  • Updated to RoboVM 0.0.14 and RoboVM Gradle plugin version 0.0.10
  • API Addition: added FreeTypeFontLoader so you can transparently load BitmapFonts generated through gdx-freetype via AssetManager, see FreeTypeFontLoaderTest.java
  • Preferences put methods now return “this” for chaining
  • Fixed issue 2048 where MessageDispatcher was dispatching delayed messages immediately.
  • API Addition: 3d particle system and accompanying editor, contributed by lordjone, see pull request 2005
  • API Addition: extended shape classes like Circle, Ellipse etc. with hashcode/equals and other helper methods, see pull request #2018
  • minor API change: fixed a bug in handling of atlasPrefixes, see pull request 2023
  • Bullet: btManifoldPoint member getters/setters changed from btVector3 to Vector3, also it is no longer pooled, instead static instances are used for callback methods
  • Added Intersector#intersectRayRay to detect if two 2D rays intersect, see pull request 2132
  • Bullet: ClosestRayResultCallback, AllHitsRayResultCallback, LocalConvexResult, ClosestConvexResultCallback and subclasses now use getter/setters taking a Vector3 instead of btVector3, see pull request #2175
  • 2d particle system supports pre-multiplied alpha.
  • Bullet: btIDebugDrawer/DebugDrawer now use pooled Vector3 instances instead of btVector3, see pull request #2174
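
As a quick taste of the headline addition: isKeyJustPressed reports a key only on the frame it went down, where the existing isKeyPressed reports it for every frame it is held.  A small sketch ( jump() is a stand-in for your own game logic ):

// Inside your render loop, using com.badlogic.gdx.Gdx and com.badlogic.gdx.Input:
if (Gdx.input.isKeyJustPressed(Input.Keys.SPACE)) {
    jump(); // fires exactly once per press, not once per frame held
}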

 

 

You can download the LibGDX setup app here.  Of course, GameFromScratch.com has a complete set of LibGDX tutorials to get you started.


8. August 2014

 

I write a great many tutorials targeting newer developers and as a direct result I am exposed to a fair number of programming related questions.  I don’t mind in the least by the way, it’s why I run this site.

 

However, I notice one very common thread among those questions…

 

They often could have been solved with just a few minutes in the debugger.

 

Then it dawned on me.  Lots of newer programmers likely don’t know all that much about debugging.  Many are probably resorting to printing to the console to see how their program works.  For good reason too…  if you pick up say… a C++ book, it’s about the language, not the tooling.  Of course, there are dedicated books such as Beginning Visual C++ 2013 or the Eclipse IDE: Pocket Guide, but these tend not to be books beginners start with.  Hell, for someone just starting out, figuring out where the language begins and the IDE ends is challenging enough!

 

Which is all a shame, as basic debugging skills will make your life a hell of a lot easier.  Not only will it make solving problems much easier, but it will help a great deal in understanding how your language works.  Plus the most tragic part of all, it’s actually very simple and often incredibly consistent across tools and programming languages.

 

So, if you have no prior debugging experience, give me 20 minutes of your time.  I guarantee you it will be worthwhile, or your money back!

 

 

I am going to use a couple different languages/IDEs in this tutorial, but as you will see, the process is remarkably similar regardless of what language you use.  I am primarily going to start with Visual Studio, then illustrate how you can perform similar actions in other environments.  First, a quick glossary of terms you are going to hear.

 

 

Glossary of terms

 

These are a few of the terms we are going to be covering during this tutorial.  Don’t worry over much if the following descriptions don’t make a lot of sense, it should be clearer by the time you finish.

 

Breakpoint

This one is critical, these are instructions that tell your code HEY, STOP RUNNING, I WANT TO LOOK AT SOMETHING HERE!  We will be using breakpoints extensively.  You can generally add/remove/enable/disable breakpoints.

 

Watch

This one is incredibly well named.  Basically these are variables you’ve said you want to keep a watch on the value of.

 

Local

Think of these like Watch expressions the IDE automatically made for you.  Basically every variable in local scope will be listed as a local.  Not all IDEs do this, but most do.

 

Expression Evaluation

This is powerful.  Basically you can type some code and see what result it returns, while your code is running.  Generally you do this once a breakpoint has been hit causing your code to pause and your debugger to be shown.

 

Call Stack

This is the hierarchy of function calls you are currently in.  For example, if you called myFunc() from main(), your callstack would look like

myFunc()

main()

 

Don’t worry, this should make sense shortly.

 

 

C++ and Visual Studio Debugger

 

I am going to start with Visual Studio/Visual C++ then show other platforms later on.  Once again, most of the process you see here is applicable to other environments.

 

Let’s start with this ultra simple code example:

void someFunction(int & inValue)
{
    inValue = 43;
}

int main(int argc, char ** argv)
{
    int i = 42;
    someFunction(i);
    return 0;
}

 

The code is extremely simple. We create a simple int, assign it a value, then pass it into a function that will assign it a different value. Now time to do some basic debugging.

 

The first thing you need to do is start debugging.  In Visual Studio, there are a couple ways to do this.  First thing, in C++, you need to tell it that you are building for debugging.  You see, when you make a debug build a few different things happen.  There are little bits of information added to your code that make the debugger work.  There are some other changes too, like memory being zeroed out, but those are beyond what we are talking about here.  The takeaway is: you need to build for debugging, then run the debugger, although generally those are one and the same task for you, the developer.

 

From the Visual Studio toolbar, you can do both:

image

Or, you can run from the Debug menu:

image

 

As you can see, F5 is also an option.  It's worth noting that debug code generally runs a bit slower and produces larger binaries, so when you are finished development, you want to compile for release.

 

Ok, so that’s how we start the debugger, but in this code sample, it will simply start, run, then finish.  That’s not very exciting.

 

 

Enter the breakpoint!

 

Ok, now we are going to enter the wonderful world of breakpoints, your new best friends.  Let’s start by setting a breakpoint on our first line, where we declare i.  There are a number of ways of setting a breakpoint.  In the IDE you can right click the line of code you want to break on then select Breakpoint –> Insert Breakpoint, like so:

image

 

… for the sake of this tutorial, please just ignore Tracepoints, at least for now.

 

You can also set a breakpoint in the Debug menu, using Toggle Breakpoint, or by hitting F9:

image

 

The line of code you just set a breakpoint on should now have a red bullet in the margin:

 

image

 

Go ahead and press F5 to debug your program.  Things will go much differently this time: your code will stop executing on the line with the breakpoint.  You can hover your mouse over a variable to see its value:

 

image

 

In this case, you will see that the value is gibberish.  This is because i hasn’t been assigned yet.  Notice the little yellow arrow on the left hand side?  This is the line of code you are currently executing.

image

 

Stepping over the corpses of your vanquished foes

 

 

Now we need to navigate in the debugger.  This is done using a couple simple commands:

 

image

 

Step Into, Step Over and Step Out.  These are also available in the toolbar:

image

As you can see, you can also use the hotkey F11 and F10.  These keys change from program to program... if they didn’t, that would just make life too easy, wouldn’t it?

 

Now as to what these do…

Step Into steps into the currently running line of code.  For example, if you are on a function call, it will step into that function.  I will show this in a second.

Step Over jumps to the next line of code in the same scope.  Step Out jumps up one level in the callstack.  Again, I will explain this in a second.

 

So what we want to do now is Step Over, and now it should look like this:

image

 

The little yellow arrow will now have advanced to the next line of code.  Notice that if we hover over i, its value is now 42, like we would have expected.  That is because the line of code has now executed.  This is a very important thing to realize… the line of code your breakpoint stopped on hasn't executed yet.  So if you want to see what value is assigned to a variable, you generally want to set the breakpoint on the next line of code.

 

Now we want to "Step Into" the current line of code.  That is, we want to see what happens when someFunction() executes.  If we chose "Step Over", we would jump to the next line ( return 0; ).  There is another important thing to realize here… even if you Step Over some code, it still runs like normal, the debugger just isn't showing it to you.  That said, we want to see someFunction() in action, so choose Step Into or choose this icon:

 

image

 

Now the line of code jumps to the beginning of someFunction():

 

image

 

You can hover over parameters to see their value:

image

 

You can continue stepping in and over code like normal, or if you are done looking at someFunction ( or some other function someFunction calls ), you can choose Step Out to jump up on the callstack.

 

image

 

 

Callstack me, maybe?  No, I didn’t just make that pun did I?

 

 

Callstack… there’s that word again.  Now that we are actually in a bit of a call stack, let’s take a look at what I mean.

 

In Visual Studio, make sure the Call Stack window is being shown.  This is controlled using Debug->Windows->Call Stack or CTRL + ALT + C, like so:

image

 

A window like the following should appear:

image

 

The key to the name is “STACK”.  Think about it like a stack of plates at a cafeteria.  Each time a function is called, it’s like putting a plate on the stack.  The bottom most plate/function is the oldest, while the top most is the newest.  In this case, the call stack tells us that on line 9, in the function main, we called someFunction() and are currently running on line 2.  Speaking of “lines”, you have the option of toggling them on or off in Visual Studio ( and most other IDEs ).

 

The process of toggling line numbers on/off isn't incredibly straightforward.  That said, line numbers can be incredibly handy, so let's do it.  First select the menu Tools->Options…

image

 

Then in the resulting dialog on the left hand side scroll down until you find Text Editor->C/C++->General, then click the checkbox next to Line Numbers.  Obviously if you are using Visual Studio and a language other than C++, you need to pick the appropriate language:

 

image

 

Now line numbers will be displayed next to your code:

image

 

Ok… back to the callstack.  Double clicking an entry in the callstack brings you to that line of code.  In this trivial example, the utility is questionable.  However, when working with a real project, where the callstack might span multiple source files, it becomes a very quick and easy way to jump between source files and to see how you ended up where you are.  This obviously is a hell of a lot more useful when someFunction() is possibly called from thousands of different locations, for example.

 

 

Locals, no more witty titles after that last witless one…

 

Now let's take a look at locals, a concept I mentioned earlier.  This is basically all the local ( non-global ) variables in the current scope.  While debugging inside someFunction, like so:

 

image

 

Open up the locals window.  Like before it can be toggled using the menu Debug->Windows->Locals:

 

image

 

Now you will have a new window, like so:

 

image

 

This is a list of all variables in the local scope.  Inside someFunction() there is only one value, its parameter inValue.  Here you can see that the current value is 42 and its data type is int reference.  As you step through someFunction and values change, they will be updated in the locals window.

 

Step out of someFunction ( Shift + F11, or using the icon or menus listed above ), and you will see the locals change to those of main():

 

image

 

 

Now you see something somewhat unique to C and C++ ( and other languages with direct memory management ).  The value passed in, argv, is a pointer to a pointer of type char.  That is to say, it points to a pointer that points to the memory address of a char data type.  If that is Greek to you at this point, don't worry about it.  You will notice though that the locals window has done a couple of very cool things.

 

First, you see a hex value:

 

image

 

 

This is the actual address of the pointer in memory.  After all, that is what pointers actually are: locations in memory.  This is incredibly useful in debugging; even if you yourself don't use pointers, code you depend on might.  Look out for values like 0x00000000 or 0xFFFFFFFF.  If you look at the value of a pointer and it is one of those values, your object wasn't allocated or has been deleted.  These are some of the most common bugs you will find in C++ code.

 

The other neat thing that Visual Studio did was this:

 

image

 

The debugger was actually smart enough to go look at the data stored at the memory address this pointer points at.  Very handy.  You may also notice the triangle to the left of argv.  This is because there is more information available.  We will see this a bit more later.

 

 

Ok, now what?

 

So… what do you do once you have found what you were looking for?  You have a couple options.  Here they are from the debug toolbar:

 

image

 

Or using the following commands from the Debug menu:

 

image

 

One thing to keep in mind: when your program is running, the options and menus available in Visual Studio are different.  For example, if you want to create a new project, you need to Stop Debugging before the menu options are even available.

 

 

Playing God with your Program

 

Ok, let’s rewind a bit and go back to the locals menu.  This time we are going to use a slightly different code example.

 

#include <iostream>
#include <string>

class MyClass{
public:
    int myInt;
    std::string myString;

    MyClass() :
        myInt(4), 
        myString("Hello World"), 
        myPrivateInt(5)
    {
    }

private:
    int myPrivateInt;
};
int main(int argc, char** argv)
{
    MyClass myClass;
    std::cout << myClass.myString;
    return 0;
}

 

Now let’s try setting a breakpoint on the final line of main, like so:

 

image

 

If you run this code, you will see:

 

image

 

You will have to ALT+TAB over to it, since the debugger has given focus to Visual Studio.

 

Now let's look at some of the funky things you can do to a running program.  Let's set another breakpoint, this one on the cout line.  Remember, the code on the line you've breakpointed hasn't been run yet!

 

image

 

Now restart or debug your program again.  It will hit the first breakpoint right away.  Now go look at the locals window again:

 

image

 

As you can see, data types composed of other data types, like myClass, can be expanded to show all the values that compose them.  Now let's do something kinda neat.

 

You can change the values of a program as it is running.  Sorta…  For basic data types, it's extremely straightforward.  For example, to change the value of myInt while debugging, simply right click it in the locals window and select Edit Value, like so:

 

image

 

You can now change the value:

 

image

 

From this point on ( during this debug session ), that value is now 42.  Of course, if the value is changed in code, it will be updated accordingly.  This allows you to tweak values interactively and see how they affect your program's execution.

 

With objects however, it is slightly more complicated.  In the previous case, we edited myInt, which is a member of myClass.  But we can't simply edit myClass directly, as the debugger has no idea how.  Therefore you can't just right click myString and select edit.  The debugger simply doesn't know how to edit this data type ( this isn't true for non-C++ languages ).  You can however modify the values that make up the string, like so:

 

image

 

As you can see, myString is of type std::basic_string, which is composed of an array of character values that make up the string.  So we can edit the character values to say… lower case our string.

 

image

 

Once again, Visual Studio is smart enough to understand the datatype.  So we could either enter the value 'w' or the ASCII code 119.  Now if you continue execution of your program, it will automatically run to the next breakpoint.  And if you look at the output, you should see:

 

image

 

One very important thing to note here… all of these changes are temporary.  They only last as long as the current debugging session.  The next time you run your program, it will run like normal.

 

 

I’m sick of these damned breakpoints

 

In that last example, when we selected Continue, we jumped to the next breakpoint, like so:

 

image

 

As you add more and more breakpoints to your code, they can make stepping through it incredibly annoying.  So, what can we do about that?

 

Well the first thing we can do is disable it.  Right click the red dot and select Disable Breakpoint, or select the line and hit CTRL + F9

 

image

 

And the breakpoint will be disabled.  Now it will show up as a hollow circle:

 

image

 

This allows you to re-enable it later, but until you do, the debugger will completely ignore the breakpoint.  You can re-enable it the same way you disabled it.  Right clicking and selecting enable, or by hitting CTRL + F9.

 

 

You can also remove a breakpoint completely in a couple of ways.  First, you can single left click the red dot and it will be removed.  The F9 key will also toggle a breakpoint on and off completely.  You can also right click and select Delete Breakpoint ( see shot above ).

 

Sometimes you want to remove or disable/enable them all at once.  You can do this using the debug menu:

image

 

Breakpoints are your friend, learn to love them!

 

I’m Watching You!

 

Now we are going to look at two final concepts, watches and expressions.  Let’s start with watches.

Until this point, we’ve only been able to look at variables declared in the local scope.  That’s all well and good, but what happens when we want to watch a value declared in a different scope?  Don’t worry, the debugger’s got you covered!

 

Consider this simple code example and breakpoint:

 

image

 

At this point in execution, your locals will look like:

 

image

That said, stringInADifferentScope has already been declared and allocated… how would you look at this value?

 

Well, there are two ways.  As you may be able to guess from the preamble, they are watches and expressions.  A watch is a variable you are keeping an eye on, even if it's not currently local.  You can set a watch while debugging code by right clicking the variable and selecting Add Watch:

 

image

 

Now you can look at watches in the watch window using the menu Debug->Windows->Watch->Watch 1 ( or CTRL+ALT+W then 1 ).  You can have up to 4 sets of Watch windows in Visual Studio.  Now you can inspect the value of watched variables at any time:

 

image

If you watch a variable that isn’t in scope, it tells you:

 

image

 

When the variable notInScope comes back in scope, its value can be retrieved by hitting the refresh icon.

 

The other option is Evaluate Expression, which is called QuickWatch in Visual Studio.  It’s an incredibly cool feature.  You can invoke QuickWatch while debugging by right clicking and selecting QuickWatch…  or by pressing Shift + F9.

 

image

 

This opens a dialog that allows you to enter whatever value you want into the Expression tab, then press Re-Evaluate and it will look up the value:

 

image

 

The Add Watch button allows you to add the selected Expression to the watch window we just saw.

 

The cool thing about this is that you can actually call some functions on your object and get the results:

 

image

 

 

On One Condition

 

Back to breakpoints for a second, then I am done I promise.

Consider the following code sample:

 

image

 

This is a very common bug scenario, but you really don't want to run through the debugger 100K times, do you?  Generally with these kinds of errors, it's only the last couple of iterations you want to look at.  Fortunately we have something called a conditional breakpoint.  This, as the name suggests, will only break if a certain condition is met.
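
To make the scenario concrete, here is a Java analogue of the same shape of bug ( the screenshot above shows a C++ sample; this is an illustration, not the original code ):

// Classic off-by-one: <= walks one element past the end of the array,
// so only the very last iteration (i == 100000) actually blows up.
char[] data = new char[100000];
for (int i = 0; i <= data.length; i++) {
    data[i] = '!';
}

A condition like i >= 99998 on a breakpoint inside the loop skips you straight to the interesting iterations instead of making you step 100K times.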

 

Add a breakpoint like normal.  Add it to the line inside of the loop.  Now right click the red dot and select Condition…

 

image

 

Now you can set the condition you will break on.

 

image

 

The breakpoint icon will now have a white plus in it:

 

image

 

Next time you run the code, it will only trip when the condition is hit:

 

image

 

Unfortunately, in Visual Studio, conditional breakpoints can make your code ungodly slow, so only use them when absolutely required.  In other languages and IDEs, this isn’t always the case.  I honestly think this is a bug in Visual Studio, as the above code should not require several seconds to evaluate, even with the additional overhead. 

 

 

One of the keys to happiness is a bad memory

 

One other thing that can be incredibly useful, especially in C++, is to look at a location in memory.  This functionality isn't always available, depending on the language you are using.  In Visual C++, it's incredibly easy and useful.  In the previous example, we filled an array of char with 100K exclamation marks ( at least, once we remove the = sign from <= :) ).  Let's say we wanted to look at the memory for data.

 

In the Debug->Windows menu, select Memory->Memory 1

image

 

A window will open up like this:

image

 

That is showing what's in memory, starting at the address 0x00287764.  What you want is the address of the variable data.  In the address box, enter the value &data.  ( For non-C++ programmers reading this, & is the address-of operator, which returns the memory location of a variable. )

 

Now you will see:

 

image

 

As you can see, data is located at 0x006D781C and is full of exclamation marks ( shown on the right ), which are represented by the character code 0x21, ASCII 33 ( shown in hex on the left ).  Looking at memory can often help you find nasty bugs.

 

 

Debugging in other languages/IDEs

 

The instructions above were obviously C++ and Visual Studio related, but other than the windows looking a bit different, some slightly different names and different hotkeys, you will find the process is almost identical.  Instead of going through the same process for every language, I will instead point out where all of these things are located or what they are called.

 

 

 

Java in Eclipse

 

Starting/Stopping:

Just like in Visual Studio, there is both a debug and run mode.  In order to debug in Eclipse you need to run using debug.  There is a menu option available:

image

You can also right click your project in the Package Explorer and select Debug As->Java Application.

image

 

Or using the Debug icon in the toolbar:

image

 

When you debug in Eclipse, you will be prompted to open in Debug perspective:

image

 

A perspective is simply a collection of windows and toolbars to accomplish a given task… such as debugging.  You can switch between perspectives using the top bar:

image

Or using the menu Window->Open Perspective.

 

 

Setting a Breakpoint:

You can set a breakpoint in a number of ways in Eclipse.  In a source window, you can right click the side column and select Toggle Breakpoint ( or CTRL+SHIFT+B ).  This adds a breakpoint, or removes one if there is one currently added.

image

You can also toggle a breakpoint using the Run menu:

image

 

Step Into/Step Over/Step Out:

While running in Debug perspective, you can perform stepping using the toolbar:

image

You can also resume and stop your program using this toolbar.  The icons are Step Into, Step Over and Step Out, from left to right.

 

You can also step using the Run Menu:

image

You can also use F5/F6/F7 to control stepping.

 

Watch Window

In Eclipse, Locals and Watch are in the same view, “Variables”. 

image

 

Variables should be available automatically when you launch into the Debug perspective.  However, you can also open it using the menu Window->Show View->Variables, or Alt+Shift+Q, then V.

 

image

 

Local Window:

See above.

 

Evaluate Expression:

Evaluate expression is called “Expressions” in Eclipse and is available using the same menu you used to open Variables.

image

Once again, you can dynamically execute code and see the results using Expressions.

 

Conditional Breakpoints

To set a conditional breakpoint in Eclipse, add a breakpoint like normal.  Then right click the dot and select Breakpoint Properties:

image

Then in the resulting dialog, check condition and enter your condition logic in the box below:

image

 

 

JavaScript in Chrome

 

Starting/Stopping:

In Chrome ( and other browsers, IE, Opera, Safari and Firefox are all very similar in functionality ), there is no such thing as debug or release, unless I suppose you count minified code.  Simply set a breakpoint in your JavaScript and refresh the page.  You will however have to open Developer Tools to be able to set a breakpoint.  Do this by clicking the Chrome menu button, selecting Tools->Developer Tools

image

 

Or as you can see above, press F12 or Ctrl + Shift + I.  Memorize that key combo, trust me…

 

Setting a Breakpoint:

To set a breakpoint, first go to a source file in the Sources tab:

image

 

Then in the source listing, right click on the left column and select Add Breakpoint:

image

… bet you can guess how to set a Conditional Breakpoint… ;)

You can also toggle a breakpoint using CTRL + B.

 

Step Into/Step Over/Step Out:

Step Over: F10

Step Into: F11

Step Out: Shift + F11

 

Or you can use the toolbar:

image

 

 

Watch Window

The watch window is located on the right hand side of the developer tools and is called Watch Expressions:

image

 

Expressions and Watches are combined into the same interface.  You can simply add a new one by clicking the + icon, then typing code accordingly:

image

 

Local Window:

Called Scope Variables in Chrome.  Located in same area:

image

 

Evaluate Expression:

See watch above.

 

Conditional Breakpoints:

Just like adding a regular breakpoint, but instead choose Conditional Breakpoint.

image

Then type your conditional logic code:

image

 

WebStorm / JavaScript

 

Ok.. I’m just being lazy here.  I remember I already wrote an article about debugging in WebStorm… recycling is good, no? ;)

 

Debugging your app in WebStorm

 

It actually covers 100% of what we just talked about above, except of course the memory view, as it isn’t applicable.

 

Ok, Done talking now

 

As you can see, across tools the experience is very similar.  Some IDEs are worse ( Xcode… ), some are very limited ( Haxe in FlashDevelop ), but generally the process is almost always exactly the same.  Of course, I only looked at a couple of IDEs, but you will find the experience very consistent across most others.  It's mostly a matter of learning a few new hotkeys and window locations.

 

One area that is massively different is command line debuggers, such as gdb.  You are still doing basically the same things, just without the nice UI layer over top.  A discussion of gdb debugging is way beyond the scope of this document and there's tons of information out there.  Heck, there are books written on the subject!

 

Hopefully that process was useful to you.  A while back I posted an example where the debugger saved my ass, if you want to see this actual process in action.  Debugging should be a part of your development process; it will make your life a hell of a lot easier, and your hair a hell of a lot less white.

 

Let me know if anything wasn't very clear; this tutorial may actually require a step by step video companion to go along with it.  If so, please let me know.


6. August 2014

 

jrenner/obfuscate ( on reddit ) released his in-development 3D engine, GDX-Proto, earlier today.  In his own words:

 

GDX-Proto is a lightweight 3d engine built with two main objectives:

  • Provide an open source codebase showing how to do many basic and essential things for 3d games with libgdx, a cross-platform Java game framework.
  • Provide a simple, extensible 3d engine that takes care of lower-level things such as physics and networking.

 

While the current version is implemented as a First Person Shooter (FPS) demo, the code is highly adaptable for other uses, without too much work.

Overview of Features

Graphics
  • Basic 3d rendering using a slightly modified version of the default libgdx 3d shader. It takes advantage of the new libgdx 3D API.
  • 3D Particle system based on the new libgdx 3d particle system (version 1.2.1+, not included in 1.2.0)
Physics
  • The Bullet physics library is used for collision detection, but not for collision resolution. This allows for fast and efficient collision detection without the performance penalties of a fully simulated bullet world. A default collision resolution system is included in the Physics class, but it can be modified to suit your needs.
  • Raycasting for projectile hit detection
Networking
  • Supports local or online play
  • KryoNet based
  • Mix of TCP and UDP where appropriate
  • Entity interpolation
  • Client prediction (for movement only, not yet implemented for projectiles)
  • Simple chat system
  • Supports libgdx headless backend for creating a headless server, such as on a VPS
  • Server transmits level geometry to client upon connection
  • "The server is the man": Most logic is run server-side to prevent cheats or hacking.
Other
  • Basic Entity system with DynamicEntities, represented by either Decals (Billboard sprites) or 3D models
  • Movement component class handles acceleration, velocity, position, rotation, max speeds
  • Subclasses of Movement: GroundMovement and FlyingMovement
  • Optional logging to file, see Log class

 

Right now it's pretty early on.  It's not actually a library as of yet, but instead a single project with a sample FPS; in the future I believe it will be refactored into a more traditional library.  It does, however, provide a solid foundation for building a 3D game on top of LibGDX, which itself provides only a relatively low level 3D layer.

 

Getting started is extremely simple.  First clone the git repository:

git clone https://github.com/jrenner/gdx-proto.git

Then run the project:

gradlew desktop:run

 

image

 

You can navigate around using standard WASD keys.  For the Gradle averse that want to try the demo out, you can download a playable jar here.  The entire project is available on Github here.


8. July 2014

 

In this part of the LibGDX tutorial series we are going to take a look at using GLSL shaders.  GLSL stands for OpenGL Shading Language, and since the move from a fixed to a programmable graphics pipeline, shader programming has become incredibly important.  In fact, every single thing rendered with OpenGL has at least a pair of shaders attached to it.  It's been pretty transparent to you till this point because LibGDX mostly takes care of everything for you.  When you create a SpriteBatch object in LibGDX, it automatically creates a default vertex and fragment shader for you.  If you want more information on working with GLSL, I put together the OpenGL Shader Programming Resource Round-up back in May.  It has all the information you should need to get up to speed with GLSL.  For more information on OpenGL in general, I also created this guide.

 

Render Pipeline Overview

 

To better understand the role of GL shaders, it's good to have a basic understanding of how the modern graphics pipeline works.  This is the high level description I gave in my PlayStation Mobile book; it's not plagiarism because I'm the author. :)

 

A top-level view of how rendering occurs might help you understand the shader process. It all starts with the shader program, vertex buffers, texture coordinates, and so on being passed in to the graphics device. Then this information is sent off to a vertex shader, which can then transform that vertex, do lighting calculations and more (we will see this process shortly). The vertex shader is executed once for every vertex and a number of different values can be output from this process (these are the out attributes we saw in the shader earlier). Next the results are transformed, culled, and clipped to the screen, discarding anything that is not visible, then rasterized, which is the process of converting from vector graphics to pixel graphics, something that can be drawn to the screen.

The results of this process are fragments, which you can think of as "prospective pixels," and the fragments are passed in to the fragment shader. This is why they are called fragment shaders instead of pixel shaders, although people commonly refer to them using either expression. Once again, the fragment shader is executed once for each fragment. A fragment shader, unlike a vertex shader, can only return a single attribute, which is the RGBA color of the individual pixel. In the end, this is the value that will be displayed on the screen. It sounds like a horribly complex process, but the GPUs have dedicated hardware for performing exactly such operations, millions upon millions of times per second. That description also glossed over about a million tiny details, but that is the gist of how the process occurs.

 

So basically shaders are little programs that run over and over again on the data in your scene.  A vertex shader works on the vertices in your scene ( predictably enough… ) and is responsible for positioning each vertex in the world.  Generally this is a matter of transforming them using some kind of matrix passed in from your program.  The output of the vertex shader is ultimately passed to a fragment shader.  Fragments are basically, as I said above, prospective pixels.  These are the actual coloured dots that are going to be drawn on the user's screen.  In the fragment shader you determine how this pixel will appear.  So basically a vertex shader is a little C-like program that is run for each vertex in your scene, while a fragment shader is run for each potential pixel.

 

There is one very important point to pause on here…  Fragment and Vertex shaders aren’t the only shaders in the modern graphics pipeline.  There are also Geometry shaders.  While vertex shaders can modify geometry ( vertices ), Geometry shaders actually create new geometry.  Geometry shaders were added in OpenGL 3.2 and D3D10.  Then in OpenGL4/D3D11 Tessellation shaders were added.  Tessellation is the process of sub-dividing a surface to add more detail, moving this process to silicon makes it viable to create much lower detailed meshes and tessellate them on the fly.  So, why are we only talking about Fragment and Vertex shaders?  Portability.  Right now OpenGL ES and WebGL do not support any other shaders.  So if you want to support mobile or WebGL, you can’t use these other shader types.

 

SpriteBatch and default Shaders

 

As I said earlier, when you use SpriteBatch, it provides a default Vertex and Fragment shader for you.  Let’s take a look at each of them now.  Let’s do it in the order they occur, so let’s take a look at the vertex shader first:

 

attribute vec4 a_position;
attribute vec4 a_color;
attribute vec2 a_texCoord0;

uniform mat4 u_projTrans;

varying vec4 v_color;
varying vec2 v_texCoords;

void main()
{
    v_color = a_color;
    v_color.a = v_color.a * (256.0/255.0);
    v_texCoords = a_texCoord0;
    gl_Position =  u_projTrans * a_position;
}

 

As I said, GLSL is a very C-like language, right down to including a main() function as the program entry point.  There are a few things to be aware of here.  First are attribute and uniform variables.  These are variables that are passed in from your source code.  LibGDX takes care of most of these for you, but if you are going to write your own default shader, LibGDX expects all of them to exist.  So then, what is the difference between a uniform and an attribute variable?  A uniform stays the same for every single vertex.  Attributes, on the other hand, can vary from vertex to vertex.  Obviously this can have performance implications, so if it makes sense, prefer using a uniform.  A varying value, on the other hand, can be thought of as the return value; these values will be passed on down the rendering pipeline ( meaning the fragment shader has access to them ).  As you can see from the use of gl_Position, OpenGL also has some built-in values.  For vertex shaders there are gl_Position and gl_PointSize.  Think of these as output variables provided by OpenGL itself.  gl_Position is ultimately the transformed position of your vertex.

 

As to what this script does, it mostly just prepares a number of variables for the fragment shader: the color, the alpha value ( slightly rescaled by that 256/255 factor ) and the texture coordinates.  The texture itself is bound to texture unit 0, either by calling Texture.bind() in your code or by LibGDX on your behalf.  Finally it positions the vertex by multiplying the vertex's position by the transformation you passed in as u_projTrans.

 

Now let’s take a quick look at the default fragment shader:

#ifdef GL_ES
#define LOWP lowp
    precision mediump float;
#else
    #define LOWP
#endif

varying LOWP vec4 v_color;
varying vec2 v_texCoords;

uniform sampler2D u_texture;

void main()
{
    gl_FragColor = v_color * texture2D(u_texture, v_texCoords);
}

 

As you can see, the format is very similar.  The ugly #ifdef allows this code to work on both mobile and higher end desktop machines.  Essentially if you are running OpenGL ES then the value of LOWP is defined as lowp, and precision is set to medium.  In real world terms, this means that GL ES will run at a lower level of precision for internal calculations, both speeding things up and slightly degrading the result. 

The values v_color and v_texCoords were provided by the vertex shader.  A sampler2D, on the other hand, is a special GLSL datatype for accessing the texture bound to the shader.  gl_FragColor is another special built-in variable ( like vertex shaders, fragment shaders have some GL-provided variables, many more than vertex shaders in fact ); this one represents the output color of the pixel the fragment shader is evaluating.  texture2D essentially returns a vec4 value representing the pixel at UV coordinate v_texCoords in texture u_texture.  The vec4 represents the RGBA values of the pixel, so for example (1.0,0.0,0.0,0.5) is a 50% transparent red pixel.  The value assigned to gl_FragColor is ultimately the color value of the pixel displayed on your screen.

 

Of course a full discussion on GLSL shaders is wayyy beyond the scope of this document.  Again if you need more information I suggest you start here.  I am also no expert on GLSL, so you are much better off learning the details from someone else! :)  This does however give you a peek behind the curtain at what LibGDX is doing each frame and is going to be important to us in just a moment.

 

Changing the Default Shader

 

There comes a time when you might want to replace the default shader with one of your own.  This process is actually quite simple, so let's take a look.  Let's say for some reason you wanted to render your game entirely in black and white.  Here is a simple vertex and fragment shader combo that will do exactly that:

 

Vertex shader:

attribute vec4 a_position;
attribute vec4 a_color;
attribute vec2 a_texCoord0;

uniform mat4 u_projTrans;

varying vec4 v_color;
varying vec2 v_texCoords;

void main() {
    v_color = a_color;
    v_texCoords = a_texCoord0;
    gl_Position = u_projTrans * a_position;
}

Fragment shader:

#ifdef GL_ES
    precision mediump float;
#endif

varying vec4 v_color;
varying vec2 v_texCoords;
uniform sampler2D u_texture;
uniform mat4 u_projTrans;

void main() {
        vec3 color = texture2D(u_texture, v_texCoords).rgb;
        float gray = (color.r + color.g + color.b) / 3.0;
        vec3 grayscale = vec3(gray);

        gl_FragColor = vec4(grayscale, 1.0);
}

I saved the files as vertex.glsl and fragment.glsl respectively, in the project assets directory.  The shaders are extremely straightforward.  The vertex shader is in fact just the default vertex shader from LibGDX.  Once again, remember you need to provide certain values for SpriteBatch to work… don't worry, things will blow up and tell you if they are missing from your shader! :)  The fragment shader simply samples the RGB value of the current texture pixel, takes the average of the RGB values and uses that as the output value.

 

Enough with shader code, let’s take a look at the LibGDX code now:

package com.gamefromscratch;

import com.badlogic.gdx.ApplicationAdapter;
import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.graphics.GL20;
import com.badlogic.gdx.graphics.Texture;
import com.badlogic.gdx.graphics.g2d.Sprite;
import com.badlogic.gdx.graphics.g2d.SpriteBatch;
import com.badlogic.gdx.graphics.glutils.ShaderProgram;

public class ShaderTestApp extends ApplicationAdapter {
    SpriteBatch batch;
    Texture img;
    Sprite sprite;
    String vertexShader;
    String fragmentShader;
    ShaderProgram shaderProgram;

    @Override
    public void create () {
        batch = new SpriteBatch();
        img = new Texture("badlogic.jpg");
        sprite = new Sprite(img);
        sprite.setSize(Gdx.graphics.getWidth(), Gdx.graphics.getHeight());

        vertexShader = Gdx.files.internal("vertex.glsl").readString();
        fragmentShader = Gdx.files.internal("fragment.glsl").readString();
        shaderProgram = new ShaderProgram(vertexShader,fragmentShader);
    }

    @Override
    public void render () {
        Gdx.gl.glClearColor(1, 0, 0, 1);
        Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
        batch.begin();
        batch.setShader(shaderProgram);
        batch.draw(sprite,sprite.getX(),sprite.getY(),sprite.getWidth(),sprite.getHeight());
        batch.end();
    }
}

 

And when you run it:

image

 

Tada, your output is grayscale!

As to what we are doing in that code: we load each shader file as a string.  We then create a new ShaderProgram, passing in a vertex and fragment shader.  The ShaderProgram is the class that populates all the various variables that your shaders expect, bridging the divide between the Java world and the GLSL world.  Then in render() we set our ShaderProgram as active by calling setShader().  Truth is, we could have done this just once in the create() method instead of once per frame.
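
One practical note: the ShaderProgram constructor compiles your GLSL immediately, and a broken shader will otherwise fail in confusing ways at draw time.  ShaderProgram provides isCompiled() and getLog() for exactly this, so a quick check after creation is cheap insurance:

shaderProgram = new ShaderProgram(vertexShader, fragmentShader);
if (!shaderProgram.isCompiled()) {
    // The log names the offending line within the shader source
    Gdx.app.error("Shader", shaderProgram.getLog());
}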

 

Multiple Shaders per Frame

 

In the above example, when we set the shader program, it applied to all of the output.  That’s nice if you want to render the entire world in black and white, but what if you just wanted to render a single sprite using your shader?  Well fortunately that is pretty easy, you simply change the shader again.  Consider:

package com.gamefromscratch;

import com.badlogic.gdx.ApplicationAdapter;
import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.graphics.GL20;
import com.badlogic.gdx.graphics.Texture;
import com.badlogic.gdx.graphics.g2d.Sprite;
import com.badlogic.gdx.graphics.g2d.SpriteBatch;
import com.badlogic.gdx.graphics.glutils.ShaderProgram;

public class ShaderTest2 extends ApplicationAdapter {
    SpriteBatch batch;
    Texture img;
    Sprite leftSprite;
    Sprite rightSprite;
    String vertexShader;
    String fragmentShader;
    ShaderProgram shaderProgram;

    @Override
    public void create () {
        batch = new SpriteBatch();
        img = new Texture("badlogic.jpg");
        leftSprite = new Sprite(img);
        rightSprite = new Sprite(img);

        leftSprite.setSize(Gdx.graphics.getWidth()/2, Gdx.graphics.getHeight());
        leftSprite.setPosition(0,0);
        rightSprite.setSize(Gdx.graphics.getWidth()/2, Gdx.graphics.getHeight());
        rightSprite.setPosition(Gdx.graphics.getWidth()/2,0);

        vertexShader = Gdx.files.internal("vertex.glsl").readString();
        fragmentShader = Gdx.files.internal("fragment.glsl").readString();
        shaderProgram = new ShaderProgram(vertexShader,fragmentShader);
    }

    @Override
    public void render () {
        Gdx.gl.glClearColor(1, 0, 0, 1);
        Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);

        batch.setShader(null);
        batch.begin();
        batch.draw(leftSprite, leftSprite.getX(), leftSprite.getY(), leftSprite.getWidth(), leftSprite.getHeight());
        batch.end();

        batch.setShader(shaderProgram);
        batch.begin();
        batch.draw(rightSprite, rightSprite.getX(), rightSprite.getY(), rightSprite.getWidth(), rightSprite.getHeight());
        batch.end();
    }
}

 

And when you run it:

image

 

One sprite is rendered using the default shader, the other using the black and white shader.  As you can see, it's simply a matter of calling setShader() multiple times.  Calling setShader() and passing in null restores the default built-in shader.  However, each time you call setShader() there is a fair amount of setup done behind the scenes, so you want to minimize the number of times you call it.  Or…

 

Setting Shader on a Mesh Object

 

Each Mesh object in LibGDX has its own ShaderProgram.  Behind the scenes, SpriteBatch is actually creating one large single Mesh out of all the sprites on your screen, which are ultimately just textured quads.  So if you have a game object that needs fine-tuned shader control, you may consider rolling your own Mesh object.  Let's take a look at such an example:

package com.gamefromscratch;

import com.badlogic.gdx.ApplicationAdapter;
import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.graphics.*;
import com.badlogic.gdx.graphics.g2d.Sprite;
import com.badlogic.gdx.graphics.g2d.SpriteBatch;
import com.badlogic.gdx.graphics.glutils.ShaderProgram;

public class MeshShaderApp extends ApplicationAdapter {
    SpriteBatch batch;
    Texture texture;
    Sprite sprite;
    Mesh mesh;
    ShaderProgram shaderProgram;

    @Override
    public void create () {
        batch = new SpriteBatch();
        texture = new Texture("badlogic.jpg");
        sprite = new Sprite(texture);
        sprite.setSize(Gdx.graphics.getWidth(),Gdx.graphics.getHeight());

        float[] verts = new float[30];
        int i = 0;
        float x,y; // Mesh location in the world
        float width,height; // Mesh width and height

        x = y = 50f;
        width = height = 300f;

        //Top Left Vertex Triangle 1
        verts[i++] = x;   //X
        verts[i++] = y + height; //Y
        verts[i++] = 0;    //Z
        verts[i++] = 0f;   //U
        verts[i++] = 0f;   //V

        //Top Right Vertex Triangle 1
        verts[i++] = x + width;
        verts[i++] = y + height;
        verts[i++] = 0;
        verts[i++] = 1f;
        verts[i++] = 0f;

        //Bottom Left Vertex Triangle 1
        verts[i++] = x;
        verts[i++] = y;
        verts[i++] = 0;
        verts[i++] = 0f;
        verts[i++] = 1f;

        //Top Right Vertex Triangle 2
        verts[i++] = x + width;
        verts[i++] = y + height;
        verts[i++] = 0;
        verts[i++] = 1f;
        verts[i++] = 0f;

        //Bottom Right Vertex Triangle 2
        verts[i++] = x + width;
        verts[i++] = y;
        verts[i++] = 0;
        verts[i++] = 1f;
        verts[i++] = 1f;

        //Bottom Left Vertex Triangle 2
        verts[i++] = x;
        verts[i++] = y;
        verts[i++] = 0;
        verts[i++] = 0f;
        verts[i] = 1f;

        // Create a mesh out of two triangles rendered clockwise without indices
        mesh = new Mesh( true, 6, 0,
                new VertexAttribute( VertexAttributes.Usage.Position, 3, ShaderProgram.POSITION_ATTRIBUTE ),
                new VertexAttribute( VertexAttributes.Usage.TextureCoordinates, 2, ShaderProgram.TEXCOORD_ATTRIBUTE+"0" ) );

        mesh.setVertices(verts);

        shaderProgram = new ShaderProgram(
                Gdx.files.internal("vertex.glsl").readString(),
                Gdx.files.internal("fragment.glsl").readString()
                );
    }

    @Override
    public void render () {

        Gdx.gl20.glViewport(0, 0, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
        Gdx.gl20.glClearColor(0.2f, 0.2f, 0.2f, 1);
        Gdx.gl20.glClear(GL20.GL_COLOR_BUFFER_BIT);
        Gdx.gl20.glEnable(GL20.GL_TEXTURE_2D);
        Gdx.gl20.glEnable(GL20.GL_BLEND);
        Gdx.gl20.glBlendFunc(GL20.GL_SRC_ALPHA, GL20.GL_ONE_MINUS_SRC_ALPHA);

        batch.begin();
        sprite.draw(batch);
        batch.end();

        texture.bind();
        shaderProgram.begin();
        shaderProgram.setUniformMatrix("u_projTrans", batch.getProjectionMatrix());
        shaderProgram.setUniformi("u_texture", 0);
        mesh.render(shaderProgram, GL20.GL_TRIANGLES);
        shaderProgram.end();
    }
}

And when you run it:

 

image

This sample is long, but fairly simple.  In create() we create the geometry for a quad by defining two triangles.  We then load our ShaderProgram just like we did in the earlier example.  You may notice that in creating the Mesh we define two VertexAttribute values and bind them to values within our ShaderProgram.  These are the input values into the shader.  Unlike with SpriteBatch and the default shader, you need to do a bit more of the behind-the-scenes work when rolling your own Mesh.

 

Then in render() you see we work with the SpriteBatch normally but then draw our Mesh object using Mesh.render, passing in the ShaderProgram.  Texture.bind() is what binds the texture from LibGDX to texture unit 0 in the GLSL shader.  We then pass in our required uniform values using setUniformMatrix and setUniformi ( as in int ).  This is how you set up uniform values from the Java side of the fence.  u_texture is saying which texture unit to use, while u_projTrans is the transformation matrix for positioning items within our world.  In this case we are simply using the projection matrix from the SpriteBatch.

 

Using a Mesh instead of a Sprite has some disadvantages, however.  When working with Sprites, all geometry is batched into a single object, which is good for performance.  More importantly, with a Mesh you have to recreate whatever Sprite functionality you require as you go.  For example, if you want to support scaling or rotation, you need to provide that functionality yourself.

