Subscribe to GameFromScratch on YouTube Support GameFromScratch on Patreon

5. November 2013

A bit of bittersweet news today out of Microsoft:

If one thing has become clear as we’ve been working on ID@Xbox, our independent developer self-publishing program for Xbox One, it’s that today’s independent game developers are using middleware to help realize their visions more than ever. Of course, middleware isn’t cheap.
One of the cool things about working at Microsoft is that we have access to pretty amazing resources. For independent developers though, tools like Unity on console can cost quite a bit.

We talked internally at ID@Xbox about ways we could help developers for Xbox One. Many developers we talk to are using Unity today to get up and running quickly, and to be able to harness the power of hardware and realize their creative visions without spending tons of time on technology development. We thought about paying for some developers’ Unity licenses but the more we talked about it, the more we felt paying for some developers’ licenses and not others just didn’t feel right.  

To us, ID@Xbox is about providing a level playing field for all developers. So, we worked with Unity and we’re pleased to announce that, when released in 2014, the Xbox One add-on for Unity will be available at no cost to all developers in the ID@Xbox program, as will special Xbox One-only Unity Pro seat licenses for Xbox One developers in the ID@Xbox program.

Will we devote marketing and promotion to promising looking titles in development? Of course. But we want to make sure the dev who’s working away in Omaha, or Coventry, or Chiba will have the same shot to realize their vision on Xbox One as one of my developer friends we hang out with in Seattle or at a trade show like GDC or Gamescom. Because at the end of the day, we want gamers to pick the hits. That’s what Xbox One is all about: One games store, the best discovery tools on console, and a powerful, equal playing field for all games, from developers big and small.

This announcement is cool for a bunch of reasons. The Unity add-on for Xbox One supports every element of Xbox One, from Kinect to SmartGlass to the impulse triggers of the new controller. Using Unity, developers will be able to take advantage of all aspects of Xbox One, which is rad. More importantly, Unity is available for Windows and Windows Phone too (and yes, the add-on is available at no cost to developers for Windows Phone and Windows 8 store games). So from one base game, developers can ship their games across all Microsoft platforms. For more details on Microsoft’s partnership with Unity, check out this Xbox Wire post from BUILD 2013.

As always, our goal at ID@Xbox and Microsoft remains the same: We want to lower friction for developers on Microsoft platforms to make sure gamers get access to the broadest and deepest library of amazing games on the planet. We’re also excited to work with other middleware and service providers to drive value for independent developers, and we hope to have even more announcements that directly benefit developers.


You can read the entire Microsoft blog post here.  You can also read about it on Unity’s blog here.  Here is a snippet of their announcement:

Unity and Microsoft will now be working together to bring the Xbox One  deployment add-on to all developers registered with the ID@Xbox program at no cost to the developers. This is huge news and means that everyone that’s part of that program, not just partners to Microsoft Games Studios, will be able to take advantage of Unity to create awesome gaming experiences for the Xbox One. On top of this, a special Xbox One version of the Unity Pro tools are also being made available for these same developers at no cost.

The Xbox One is a powerful platform and we’re building powerful tools to take advantage of all of the features that make it so special like the Kinect and SmartGlass. Production is well underway and is progressing faster than originally anticipated! Very early testing phases will begin soon with a broader beta program in 2014.


In case you have never heard of it, ID@Xbox is Microsoft’s independent developer publishing program.  Of key importance is probably this piece from the ID@Xbox FAQ about who can qualify:

Of course, we’ll be evaluating each developer application individually on its own merits, but in the initial phase of ID@Xbox, we are looking for professional independent game developers who have a proven track record of shipping games on console, PC, mobile, or tablet. We want to ensure your success in your development effort on Xbox One. Developing and publishing a console game is not trivial!

Our longer term plan is that anyone with a retail Xbox One will be able to develop, publish, and sell their game on Xbox Live.


So, in a nutshell: if you are a member of ID@Xbox, you can now get a version of Unity for free supporting Xbox One functionality, including SmartGlass, the controller and Kinect.

Why bittersweet?  This essentially means the chances of an XNA successor are pretty much zero.  It also means alternatives to Unity are becoming increasingly rare.  Additionally, pure hobbyist developers are left in the lurch for now.  Got an Xbox One and want to just play around making games?  You can’t, for now.  Again from the FAQ:

Can I use my retail kit as a development kit?

As part of our vision for enabling everyone with an Xbox One to be a creator, we absolutely intend to enable people to develop games using their retail kits. Right now, though, you still need a development kit! We provide two kits to everyone in the registered developer program. Additional kits, if needed, can be purchased.

Bummer, at least for now.


So, if you are an established indie developer, or more specifically an established indie developer working in Unity, this is amazingly good news.  If however you are a hobbyist, especially one hoping for another XNA-like SDK for Xbox One, this certainly isn’t.  Of course, this isn’t to say Microsoft won’t be creating another XNA-like development kit, but given this news, I highly doubt it.  They’ve effectively outsourced it to Unity.

3. November 2013

There was a new LibGDX release today, new features include:

  • 3D API.  This one has been in the works for some time and brings 3D to LibGDX, built over OpenGL ES 2.0.  Click here for more information on 3D support.
  • iOS back end moved from Xamarin’s MonoTouch to RoboVM.  No more $300 charge to support iOS!
  • Updates to LWJGL, box2D and Bullet Physics libraries to the latest stable releases.
  • Android x86 support.  Beyond the contest, not sure what the win is here.  Faster emulation?
  • LibGDX added to maven ( com.badlogicgames.libgdx ).
  • Gradle build option… is this one step away from the insanity that is Eclipse?  I sure hope so!
  • Small bug fixes and improvements.  See the list here.


LibGDX test of shader with skinning:

LibGDX bullet physics on iOS using RoboVM


You can read more about the release here.

News

1. November 2013


Blender announced the release of Blender 2.69, so now we are going to take a quick look at what is in it for game developers.


The biggest feature on that front is the ability to import FBX files, as well as export FBX and OBJ files with split normals.  As FBX support improves, it becomes easier and easier to slot Blender into a seamless multi application workflow.


The mesh bisect tool was added for quickly cutting an object in half:

[Image: the Mesh bisect tool]


A Clean-up tool was added for automatically detecting and fixing holes in a mesh.

Symmetrize was re-written and now preserves UV and mesh data.

Another notable new feature was the addition of the Hidden Wire display mode.  With this enabled, only front-facing wireframe is shown:

[Image: Hidden Wire display mode]

There were a number of other small modeling changes as well.


Plane Tracking was added to the Motion Tracker, for replacing flat surfaces in a scene, such as a billboard.

[Image: the Blender 2.69 Motion Tracker]


There were also a number of improvements to the Cycles renderer.


All told, not a ton new in this update.

News, Art

30. October 2013


In the previous tutorial we looked at handling touch and gesture events.  These days, most mobile devices have very accurate motion detection capabilities, which LibGDX fully supports.  In this example we will look at how to handle motion as well as detect if a device supports certain functionality and to detect which way the device is oriented.


This project revolves around a single code example, but there are some configuration steps you need to be aware of.


First off, in order to tell LibGDX that you want to use the compass and accelerometer, you need to pass that as part of the configuration in your Android MainActivity.  In the Android project, locate MainActivity.java and edit it accordingly:

package com.gamefromscratch;

import android.os.Bundle;

import com.badlogic.gdx.backends.android.AndroidApplication;
import com.badlogic.gdx.backends.android.AndroidApplicationConfiguration;

public class MainActivity extends AndroidApplication {

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        AndroidApplicationConfiguration cfg = new AndroidApplicationConfiguration();
        cfg.useGL20 = true;
        cfg.useAccelerometer = true;
        cfg.useCompass = true;

        initialize(new MotionDemo(), cfg);
    }
}

The meaningful lines are

cfg.useAccelerometer = true;


cfg.useCompass = true;


These lines tell LibGDX to enable both.

Next we need to make a couple of changes to your Android manifest.  This is a configuration file of sorts that tells the Android operating system how your application behaves and what permissions it requires to run.  You could literally write an entire book about dealing with Android manifests, so if you want more information read here.  The manifest is located at the root of your Android project and is called AndroidManifest.xml.  There are a couple of ways you can edit it: simply right-click AndroidManifest.xml and select Open With, then pick an editor.



I personally prefer to simply edit using the Text Editor, but if you want a more guided experience, you can select Android Manifest Editor, which brings up this window:

[Image: AndroidManifest.xml open in Eclipse’s Android Manifest Editor]

This is basically a GUI layer over top of the Android manifest.  Using the tabs across the bottom you can switch between the different categories and a corresponding form will appear.  If you click AndroidManifest.xml it will bring up a text view of the manifest.  Use whichever interface you prefer, it makes no difference in the end.

There are two changes we want to make to the manifest.  First, we want the device to support rotation, so if the user rotates their device, the application rotates accordingly.  This is done by setting the property android:screenOrientation to fullSensor.  Next, we want to grant the permission android.permission.VIBRATE.  If you do not add this permission, calling vibrate() will cause your application to crash!


Here is how my manifest looks with changes made:

&lt;?xml version="1.0" encoding="utf-8"?&gt;
&lt;manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.gamefromscratch"
    android:versionCode="1"
    android:versionName="1.0" &gt;

    &lt;uses-sdk android:minSdkVersion="5" android:targetSdkVersion="17" /&gt;
    &lt;uses-permission android:name="android.permission.VIBRATE"/&gt;

    &lt;application
        android:label="@string/app_name" &gt;
        &lt;activity
            android:name=".MainActivity"
            android:label="@string/app_name"
            android:screenOrientation="fullSensor" &gt;
            &lt;intent-filter&gt;
                &lt;action android:name="android.intent.action.MAIN" /&gt;
                &lt;category android:name="android.intent.category.LAUNCHER" /&gt;
            &lt;/intent-filter&gt;
        &lt;/activity&gt;
    &lt;/application&gt;

&lt;/manifest&gt;

The key changes are the android:screenOrientation attribute and the VIBRATE permission.  You want to be careful when you request additional permissions, as they will be shown when the user installs your application.  Too many permissions and people start getting scared of your application.  Of course, if you need to do something that requires a permission, there isn’t much you can do!  As to the screenOrientation value, this tells Android which orientations your application supports.  There are a number of options, Landscape and Portrait being two of the most common.  fullSensor basically means all directions are supported: you can rotate the device 360 degrees and the application will rotate accordingly.  On the other hand, if you select “user”, you cannot rotate the device 180 degrees, meaning you cannot use it upside down.  You can read more about the available properties in the link I provided earlier.
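As a quick illustration, here is roughly what the activity entry would look like if you instead locked the game to landscape only (a sketch: the activity and label names are placeholders for whatever your project uses; the attribute value comes from the Android documentation):

```xml
<!-- Hypothetical activity entry locking orientation to landscape -->
<activity
    android:name=".MainActivity"
    android:label="@string/app_name"
    android:screenOrientation="landscape" >
</activity>
```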

There is one last important thing to be aware of before moving on.  Your Android project will actually have two AndroidManifest.xml files: one in the root directory, another in the bin subfolder.  Be certain to use the one in the root directory, as the one in bin is overwritten each build!


Ok, now that we are fully configured, let’s jump into the code sample:

package com.gamefromscratch;

import com.badlogic.gdx.ApplicationListener;
import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.Input.Orientation;
import com.badlogic.gdx.Input.Peripheral;
import com.badlogic.gdx.graphics.GL10;
import com.badlogic.gdx.graphics.g2d.BitmapFont;
import com.badlogic.gdx.graphics.g2d.SpriteBatch;

public class MotionDemo implements ApplicationListener {

    private SpriteBatch batch;
    private BitmapFont font;
    private String message = "Do something already!";
    private float highestY = 0.0f;

    @Override
    public void create() {
        batch = new SpriteBatch();
        font = new BitmapFont(Gdx.files.internal("data/arial-15.fnt"), false);
    }

    @Override
    public void dispose() {
        batch.dispose();
        font.dispose();
    }

    @Override
    public void render() {
        int w = Gdx.graphics.getWidth();
        int h = Gdx.graphics.getHeight();

        Gdx.gl.glClearColor(1, 1, 1, 1);
        Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT);
        batch.begin();

        int deviceAngle = Gdx.input.getRotation();
        Orientation orientation = Gdx.input.getNativeOrientation();
        float accelY = Gdx.input.getAccelerometerY();
        if(accelY > highestY)
            highestY = accelY;

        message = "Device rotated to:" + Integer.toString(deviceAngle) + " degrees\n";
        message += "Device orientation is ";
        switch(orientation) {
            case Landscape:
                message += " landscape.\n";
                break;
            case Portrait:
                message += " portrait. \n";
                break;
            default:
                message += " complete crap!\n";
                break;
        }

        message += "Device Resolution: " + Integer.toString(w) + "," + Integer.toString(h) + "\n";
        message += "Y axis accel: " + Float.toString(accelY) + " \n";
        message += "Highest Y value: " + Float.toString(highestY) + " \n";

        if(Gdx.input.isPeripheralAvailable(Peripheral.Vibrator)) {
            if(accelY > 7) {
                Gdx.input.vibrate(100);
            }
        }

        if(Gdx.input.isPeripheralAvailable(Peripheral.Compass)) {
            message += "Azimuth:" + Float.toString(Gdx.input.getAzimuth()) + "\n";
            message += "Pitch:" + Float.toString(Gdx.input.getPitch()) + "\n";
            message += "Roll:" + Float.toString(Gdx.input.getRoll()) + "\n";
        } else {
            message += "No compass available\n";
        }

        font.drawMultiLine(batch, message, 0, h);
        batch.end();
    }

    @Override
    public void resize(int width, int height) {
        batch.dispose();
        batch = new SpriteBatch();
        String resolution = Integer.toString(width) + "," + Integer.toString(height);
        Gdx.app.log("MJF", "Resolution changed " + resolution);
    }

    @Override
    public void pause() {
    }

    @Override
    public void resume() {
    }
}


When you run this program on a device, you should see the various sensor readings displayed on screen.



As you move the device, the various values will update.  If you raise your phone to within about 45 degrees of completely upright, it will vibrate.  Of course, this all assumes that your device supports these sensors!


The code itself is actually remarkably straightforward; LibGDX makes working with motion sensors easy.  It’s in understanding the returned values that things get a bit more complicated.  The vast majority of the logic is in the render() method.  First we get the angle the device is rotated to.  This value is in degrees, with 0 being straight in front of you, parallel to your face.  One important thing to realize is that this value will always have 0 as up, regardless of whether you are in portrait or landscape mode.  This is something LibGDX does to make things easier for you, but it is different behaviour than the Android norm.

Next we get the orientation of the device.  Orientation can be either landscape or portrait (wide screen vs tall screen).  Then we check the value of the accelerometer along the Y axis using getAccelerometerY().  You can also check the accelerometer for movement along the X and Z axes using getAccelerometerX() and getAccelerometerZ() respectively.  Once again, LibGDX standardizes the axis directions, regardless of the device’s orientation.  Speaking of which, Y is up.  This means if you hold your phone straight in front of you, parallel to your face, the Y axis is what you would traditionally think of as up and down.  The Z axis points toward you, so a pushing or pulling motion would be along the Z axis, while the X axis tracks movements to the left and right.

So then, what exactly are the values returned by the accelerometer?  This part gets a bit confusing, as the sensor measures acceleration, and that includes the constant pull of gravity.  If you hold your phone straight out in front of you, with the screen parallel to your face, the Y axis will return a value of 9.8.  That number should look familiar: it’s the acceleration due to gravity, 9.8 meters per second squared.  Therefore, if your phone is stationary and upright, the reading is 9.8.  If you move the phone up parallel to your body, the value will rise above 9.8; how much depends on how fast you are moving the phone.  Moving down, on the other hand, will return a value below 9.8.  If you put the phone down flat on a desk it will instead return 0, and flipping the phone upside down will return -9.8 if held stationary.  The same occurs along the X and Z axes, but there it indicates motion left and right or in and out instead of up and down.

Ok, back to our code.  We check to see if the current accelY value is the highest so far and, if it is, we record it to display.  Next we check which value the orientation returned and display the appropriate message, then dump the information we’ve gathered out to be displayed on screen.  Next we make the very important call Gdx.input.isPeripheralAvailable().  This will return true if the user’s device supports the requested functionality.  First we check to see if the phone supports vibrating and, if it does, we check if accelY is over 7.  Remember, the value 9.8 represents straight up and down, so a reading of 7 or higher means the device is within about 45 degrees of vertical.  If it is, we vibrate by calling vibrate(); the value passed is the number of milliseconds to vibrate for.
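To see what angle a threshold of 7 actually corresponds to, you can work the math out directly.  This little standalone sketch (my own helper, not part of LibGDX) converts a stationary Y accelerometer reading back into an angle from vertical:

```java
// Hypothetical helper, not part of LibGDX: interprets a stationary
// Y accelerometer reading as an angle from vertical.
public class TiltMath {
    static final double GRAVITY = 9.8; // reading when perfectly upright and still

    static double degreesFromVertical(double accelY) {
        // Clamp in case sensor noise pushes the reading past +/- gravity
        double ratio = Math.max(-1.0, Math.min(1.0, accelY / GRAVITY));
        return Math.toDegrees(Math.acos(ratio));
    }

    public static void main(String[] args) {
        // The tutorial's vibrate threshold: a reading of 7 is roughly 44 degrees from upright
        System.out.println(degreesFromVertical(7.0));
        System.out.println(degreesFromVertical(9.8)); // 0.0, perfectly vertical
    }
}
```

This is only meaningful while the device is held still; once you start moving it, the reading mixes gravity with your own motion.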

Next we check to see if the device has a compass.  If it does, you can check the position of the device relative to magnetic north.  Here are the descriptions of each value from Google’s documentation:

Azimuth, rotation around the Z axis (0<=azimuth<360). 0 = North, 90 = East, 180 = South, 270 = West
Pitch, rotation around X axis (-180<=pitch<=180), with positive values when the z-axis moves toward the y-axis.
Roll, rotation around Y axis (-90<=roll<=90), with positive values when the z-axis moves toward the x-axis.

You can read more about it here.
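The azimuth is the value most games actually care about.  As a quick illustration (this helper is my own, not a LibGDX API), you could bucket the azimuth into compass points like this:

```java
// Hypothetical helper, not part of LibGDX: maps an azimuth in degrees
// (0 = North, 90 = East, ...) to the nearest eight-point compass heading.
public class CompassHeading {
    static String heading(float azimuth) {
        String[] dirs = {"N", "NE", "E", "SE", "S", "SW", "W", "NW"};
        // Normalize into [0, 360) then round into 45-degree sectors
        float norm = ((azimuth % 360f) + 360f) % 360f;
        int idx = Math.round(norm / 45f) % 8;
        return dirs[idx];
    }

    public static void main(String[] args) {
        System.out.println(heading(0f));    // N
        System.out.println(heading(90f));   // E
        System.out.println(heading(350f));  // N
    }
}
```

You would feed it the value returned by Gdx.input.getAzimuth() each frame.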

Finally we draw the message we have been composing on screen.

There is only one other very important thing to notice in this example:

public void resize(int width, int height) {
    batch.dispose();
    batch = new SpriteBatch();
    String resolution = Integer.toString(width) + "," + Integer.toString(height);
    Gdx.app.log("MJF", "Resolution changed " + resolution);
}


In the resize() method we dispose of and recreate our SpriteBatch.  This is because when you change the orientation of the device from landscape to portrait, or vice versa, you invalidate the SpriteBatch; it is now the wrong size for your device.  Therefore, in the resize() call, we dispose of the old SpriteBatch and create a new one.

Programming

26. October 2013

Today where I live the weather is absolutely abysmal ( think English weather, but colder ) and I have very little desire to work on any of my own projects, so I took to Safari to see what new books were released.  There’s a new title, Game Development with Three.js, that was just released today. ( Safari Link ).  I have long been interested in learning Three.js so I decided to check it out.  If you’ve never heard of it, Three.js is probably the most popular 3D library for WebGL development.  It provides a large range of functionality including fallback renderers if WebGL is missing, a scene graph, animation, lights, materials, shaders, primitives and even object loaders for most of the popular 3D applications.


So today I decided to jump in and read Game Development with Three.js.  First things first… this book is short, very short.  Just over 100 pages actually.  On the other hand, it’s reasonably cheap at $10 for the Kindle version.  The print version is a slightly more “rip-offish” $25, but don’t hold that against the author… it’s the way Packt prices books… my own was priced at $17 for the Kindle version and a whopping $50 for the print version.  Apparently Packt wants to sell e-books… anyways, back to the book.


It’s short, but remarkably concise.  Here is the table of contents:

  • Preface
  • Hello, Three.js
  • Building a world
  • Exploring and Interacting
  • Adding Detail
  • Design and Development


The first chapter is the obvious introductory chapter: setting up a development environment, configuring Three.js and a simple introductory sample.  Where needed there are appropriate and useful graphics, such as visualizing the differences between orthographic and perspective rendering.  Chapter two is probably the meat of the book; it’s a crash course in Three.js, introducing geometry, lighting types, rendering, etc.  The main sample from the chapter creates a cityscape using primitives.




It’s a pretty clever example to use when just working with raw primitives.  One thing the book does well is graphical tables illustrating concepts.  Here for example is a subsection of the part on the various shading options available in Three.js:




It’s effective, easy to grok and clean.


The third chapter starts down the “game” part of the book, covering the non-Three.js aspects of the book.  That includes keyboard and mouse handling, mouse hit detection ( ray casting ), and starts on a simple voxel-based first person shooter.  This is where you create the skeleton of a game, such as the game loop, a simple text-based map format, movement, collision and bullets.


Chapter four is all about fleshing out the first person shooter, such as loading assets from 3D modelling applications, simple animation, particle systems, sound ( an experimental aspect of Three.js ) and rendering effects/post processing.  


The fifth chapter is a hodgepodge of topics such as optimization, network usage, level of detail, JavaScript best practices, etc.


So, what did I think of the book?  Well, for my needs, a rainy afternoon time killer that introduces Three.js, it did exactly that.  There is a surprising amount of information jammed into just over 100 pages.  That said, for a 100 page book, they left a lot out as well.  If you’ve got no prior game programming experience and need concepts like the game loop, coordinate systems, or general terms ( like UV mapping or texturing ) explained to you, you should look elsewhere.  The book’s coverage of most topics simply isn’t that deep.  Additionally, there are a few things that are absent or only briefly covered, such as shader programming, which I think is important enough to merit an entire chapter of its own.  It does however present a complete, if simple, game to learn from, so that is certainly useful for beginners.  If you are somewhat experienced with game development and want a crash course in Three.js, this book is a very good read… especially on a rainy day.



GLFW 3.2 Released

2. June 2016


GLFW, a library providing cross platform window and input handling functionality for OpenGL, OpenGL ES and now Vulkan applications, just released version 3.2.  The aforementioned Vulkan support is probably the biggest new feature, but this release contains several others as well.

From the complete change log:

  • Added glfwVulkanSupported, glfwGetRequiredInstanceExtensions, glfwGetInstanceProcAddress, glfwGetPhysicalDevicePresentationSupport and glfwCreateWindowSurface for platform independent Vulkan support
  • Added glfwSetWindowMonitor for switching between windowed and full screen modes and updating the monitor and desired video mode of full screen windows
  • Added glfwMaximizeWindow and GLFW_MAXIMIZED for window maximization
  • Added glfwFocusWindow for giving windows input focus
  • Added glfwSetWindowSizeLimits and glfwSetWindowAspectRatio for setting absolute and relative window size limits
  • Added glfwGetKeyName for querying the layout-specific name of printable keys
  • Added glfwWaitEventsTimeout for waiting for events for a set amount of time
  • Added glfwSetWindowIcon for setting the icon of a window
  • Added glfwGetTimerValue and glfwGetTimerFrequency for raw timer access
  • Added glfwSetJoystickCallback and GLFWjoystickfun for joystick connection and disconnection events
  • Added GLFW_NO_API for creating window without contexts
  • Added GLFW_INCLUDE_VULKAN for including the Vulkan header
  • Added GLFW_CONTEXT_CREATION_API, GLFW_NATIVE_CONTEXT_API and GLFW_EGL_CONTEXT_API for run-time context creation API selection
  • Added GLFW_CONTEXT_NO_ERROR context hint for GL_KHR_no_error support
  • Added GLFW_TRUE and GLFW_FALSE as client API independent boolean values
  • Added icons to examples on Windows and OS X
  • Relaxed rules for native access header macros
  • Removed dependency on external OpenGL or OpenGL ES headers
  • [Win32] Added support for Windows 8.1 per-monitor DPI
  • [Win32] Replaced winmm with XInput and DirectInput for joystick input
  • [Win32] Bugfix: Window creation would segfault if video mode setting required the system to be restarted
  • [Win32] Bugfix: MinGW import library lacked the lib prefix
  • [Win32] Bugfix: Monitor connection and disconnection events were not reported when no windows existed
  • [Win32] Bugfix: Activating or deactivating displays in software did not trigger monitor callback
  • [Win32] Bugfix: No monitors were listed on headless and VMware guest systems
  • [Win32] Bugfix: Pressing Ctrl+Pause would report GLFW_KEY_UNKNOWN
  • [Win32] Bugfix: Window size events would be reported in wrong order when restoring a full screen window
  • [Cocoa] Made joystick polling more efficient
  • [Cocoa] Removed support for OS X 10.6
  • [Cocoa] Bugfix: Full screen windows on secondary monitors were mispositioned
  • [Cocoa] Bugfix: Connecting a joystick that reports no name would segfault
  • [Cocoa] Bugfix: Modifier flags cache was not updated when window became key
  • [Cocoa] Bugfix: Dead key character composition did not work
  • [Cocoa] Bugfix: The CGL context was not released until the autorelease pool was drained by another function
  • [X11] Bugfix: Monitor connection and disconnection events were not reported
  • [X11] Bugfix: Decoding of UTF-8 text from XIM could continue past the end
  • [X11] Bugfix: An XKB structure was leaked during glfwInit
  • [X11] Bugfix: XInput2 XI_Motion events interfered with the Steam overlay
  • [POSIX] Bugfix: An unrelated TLS key could be deleted by glfwTerminate
  • [Linux] Made joystick polling more efficient
  • [WGL] Changed extension loading to only be performed once
  • [WGL] Removed dependency on external WGL headers
  • [GLX] Added glfwGetGLXWindow to query the GLXWindow of a window
  • [GLX] Replaced legacy drawable with GLXWindow
  • [GLX] Removed dependency on external GLX headers
  • [GLX] Bugfix: NetBSD does not provide
  • [EGL] Added _GLFW_USE_EGLPLATFORM_H configuration macro for controlling whether to use an existing EGL/eglplatform.h header
  • [EGL] Added and documented test for if the context is current on the calling thread during buffer swap
  • [EGL] Removed dependency on external EGL headers

GameDev News

