
17. June 2012

 

In this tutorial we are going to look at adding keyboard and mouse support to your cocos2D application.  cocos2D was traditionally used on the iPhone, so keyboard and mouse support is very touch centric; if you are used to desktop development and things seem a bit strange, this is why.

Alright, let’s jump right in.  We are going to create a simple game with our jet sprite from tutorial 3.  The big difference is it will respond to your mouse and keyboard activity.  Clone your previous project; this time I am calling it MyThirdApp.  Yeah, tons of thought went into these names!

 

Now create a new file in the Classes directory named JetSprite.js. Inside, write the following code:

 

var JetSprite = cc.Sprite.extend({
    _currentRotation:0,
    ctor:function(){
        this._super();
        this.initWithFile("images/jet.png");
    },
    update:function(dt){
        this.setRotation(this._currentRotation);
    },
    handleKey:function(e)
    {
        if(e === cc.KEY.left)
        {
            this._currentRotation--;

        }
        else if(e === cc.KEY.right)
            this._currentRotation++;

        if(this._currentRotation < 0) this._currentRotation = 360;
        if(this._currentRotation > 360) this._currentRotation = 0;
    },
    handleTouch:function(touchLocation)
    {
        if(touchLocation.x < 300)
            this._currentRotation = 0;
        else
            this._currentRotation = 180;
    },
    handleTouchMove:function(touchLocation){
        // Gross use of hardcoded width,height params.
        var angle = Math.atan2(touchLocation.x-300,touchLocation.y-300);

        angle = angle * (180/Math.PI);
        this._currentRotation = angle;

    }
});

 

Here we are deriving our new class JetSprite from cc.Sprite using the extend() function.  In the constructor, we first call back to cc.Sprite’s constructor, then load our sprite from the file jet.png located in the images folder.  ( If you, like me, started your project by cloning MySecondApp, it should already be there; otherwise make sure you have an image there. )

 

The update() function is going to be called once every frame; this will make sense later.  Essentially this is where you put your object’s logic.  In this case we are simply rotating our sprite to the value in _currentRotation.  handleKey(), handleTouch() and handleTouchMove() are three user defined functions that are going to be used to pass I/O events into our JetSprite object.
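As a quick illustration of why update() receives dt ( the elapsed time since the last frame ), here is a hedged sketch of a frame-rate independent version of the rotation.  _rotationSpeed is a hypothetical property added purely for illustration; it is not part of the tutorial code:

    update:function(dt){
        // rotate at a constant speed in degrees per second, regardless of frame rate
        this._currentRotation += this._rotationSpeed * dt;
        this.setRotation(this._currentRotation);
    },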

 

handleKey() is passed the key value ( it’s a number ) of the currently pressed key.  cocos2D defines a key value for each key in the cc.KEY object.  If you want more details, check out CCKeyboardDispatcher.js in the cocos2D directory.  In the event the left arrow key is held down, we decrement our rotation value; if the right arrow key is down, we increment the rotation value.  Finally, if we roll over 360 or under 0 degrees, we wrap the value around.  Effectively this rotates our sprite left and right on arrow key press.
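If you are ever unsure which cc.KEY constant a given key maps to, one quick option is to temporarily log the incoming code from handleKey() and compare it against the entries in CCKeyboardDispatcher.js.  A throwaway sketch ( console.log is plain browser JavaScript, not part of cocos2D ):

    handleKey:function(e)
    {
        console.log("key code received: " + e);   // compare against the cc.KEY entries
        if(e === cc.KEY.left)       this._currentRotation--;
        else if(e === cc.KEY.right) this._currentRotation++;

        if(this._currentRotation < 0) this._currentRotation = 360;
        if(this._currentRotation > 360) this._currentRotation = 0;
    },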

 

handleTouch() is very simple.  It is passed the touchLocation, which is an x,y value representing where on screen was touched ( or in our case, clicked ).  If it is on the left half of the screen, we point up ( 0 degrees ); if the touch occurred on the right half of the screen, we rotate to straight down ( 180 degrees ).  One limitation of cocos2D seems to be the lack of ability to handle right clicks, but this may just be an oversight on my part.  Right now, a “touch” is any mouse button click.
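If you would rather not hard-code the screen width here either, you could ask the director for the window size, the same call MyThirdApp’s init() uses below.  A minimal sketch of handleTouch() written that way:

    handleTouch:function(touchLocation)
    {
        var size = cc.Director.getInstance().getWinSize();
        if(touchLocation.x < size.width / 2)
            this._currentRotation = 0;      // left half of the screen: point up
        else
            this._currentRotation = 180;    // right half of the screen: point down
    },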

 

handleTouchMove() is passed the x,y coordinates of the mouse’s current location.  As you can tell by the comment, I rather crudely used a hardcoded value to represent half the screen width ( 300 ); obviously in non-demo code you wouldn’t do this!  What we want to do here is figure out the angle of the mouse relative to the center of our sprite ( which is located at 300,300 ).  This is done by taking the atan2 of the distance of the mouse’s coordinates from the sprite’s center.  We then convert that value ( atan2 returns a value in radians ) to degrees.  Finally, we take our calculated angle and apply it as our rotation.  Essentially this will cause our sprite to rotate to follow the mouse location.
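For reference, here is a sketch of how you might drop the hardcoded ( 300,300 ) center by asking the sprite for its own position.  This assumes cc.Sprite exposes getPosition(), the counterpart of the setPosition() call we use later:

    handleTouchMove:function(touchLocation){
        var pos = this.getPosition();   // the sprite's current position
        var angle = Math.atan2(touchLocation.x - pos.x, touchLocation.y - pos.y);
        this._currentRotation = angle * (180/Math.PI);
    },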

 

Now let’s take a look at the heart of our application.  Create a new script file ( or rename MySecondApp.js ) called MyThirdApp.js and enter the following code:

 

var MyThirdApp = cc.LayerColor.extend({
    _jetSprite:null,
    init:function(){
        this._super();
        this.initWithColor(new cc.Color4B(0,0,0,255));
        var size = cc.Director.getInstance().getWinSize();

        this._jetSprite = new JetSprite();
        this.setTouchEnabled(true);
        this.setKeyboardEnabled(true);

        this.setPosition(new cc.Point(0,0));

        this.addChild(this._jetSprite);
        this._jetSprite.setPosition(new cc.Point(size.width/2,size.height/2));
        this._jetSprite.scheduleUpdate();
        this.schedule(this.update);

        return true;
    },
    onEnter:function(){
        this._super();
    },
    update:function(dt){
    },
    onTouchesEnded:function (pTouch,pEvent){
        this._jetSprite.handleTouch(pTouch[0].getLocation());
    },
    onTouchesMoved:function(pTouch,pEvent){
        this._jetSprite.handleTouchMove(pTouch[0].getLocation());
    },
    onKeyUp:function(e){

    },
    onKeyDown:function(e){
        this._jetSprite.handleKey(e);
    }
});



var MyThirdAppScene = cc.Scene.extend({
    onEnter:function(){
        this._super();
        var layer = new MyThirdApp();
        layer.init();
        this.addChild(layer);
    }
});

 

The bulk of this code is very similar to previous code, with one major difference.  In previous examples, we created a layer, then added a color layer to it.  In this case, we simply extend from cc.LayerColor and add our children directly to ourselves.  There is very little functional difference between the two approaches, except this one is a bit more concise.  Now let’s take a closer look at our code.
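First, for comparison, here is a rough sketch of the older approach: a plain layer that adds a separate colour layer as a child, using only calls we have already seen ( and assuming cc.Layer can be extended the same way as cc.LayerColor ):

    var MyThirdAppAlt = cc.Layer.extend({
        init:function(){
            this._super();
            // add a separate black colour layer instead of being one ourselves
            var background = new cc.LayerColor();
            background.initWithColor(new cc.Color4B(0,0,0,255));
            this.addChild(background);
            return true;
        }
    });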

 

In our init() method, we init with a black background colour.  We then create our JetSprite object and tell cocos2D that our layer handles touch and keyboard events.  In cocos2D, input is handled at the layer level, and setting these two values will result in touch and keyboard delegates being added to our object so we can handle I/O events.  We then position our layer and add our _jetSprite as a child, positioned in the center of the layer.  The scheduleUpdate() call is very important.  This will result in our JetSprite’s update() function getting called every frame.  If you don’t call this method, our sprite won’t do anything!

 

Next up is onEnter(), which is called when a layer becomes the active layer.  In our case, this call is very important, as it ( in cc.Layer.onEnter ) is where the touch and keyboard delegates are actually attached to our layer!  If you remove the call to this._super(), you will not receive any keyboard or mouse events!

 

Now that we are wired up to receive touch and keyboard events, we need a couple of methods that will be called when they occur.  In our case we are going to use onTouchesEnded, onTouchesMoved, onKeyUp and onKeyDown.  When an event occurs, these methods will be called, which in turn simply call the appropriate method on our JetSprite.

 

There are two things to be aware of here.  There are onTouch____ and onTouches____ versions of these events; in my experience, the onTouch____ version will never be called!  If you want to handle touch/mouse events, use the onTouches version.  The next is the way we are handling key presses: since we used onKeyDown, holding down the key will result in it being called over and over.  If you want to respond only once per key hit ( regardless of whether it is held down or not ), use onKeyUp instead.
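For example, if you wanted the jet to rotate one step per key press no matter how long the key is held, a minimal sketch would be to move the call into onKeyUp and leave onKeyDown empty:

    onKeyUp:function(e){
        // fires once, when the key is released
        this._jetSprite.handleKey(e);
    },
    onKeyDown:function(e){
        // intentionally empty; ignore the auto-repeat while the key is held
    },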

 

We also need to make sure our new classes are included in our project.  Just like the last tutorial, make the following changes to the bottom of cocos2d.js

appFiles:['MyThirdApp.js','JetSprite.js']

 

We also need to update the Scene name in main.js

var myApp = new cocos2dApp(MyThirdAppScene);

Here is the end result of all of our code.  Mouse over the canvas and the sprite will rotate to follow your mouse.  The left and right arrow keys rotate the sprite, clicking (either button) on the left half of the screen points the jet up, and clicking the right half points it down.

 

 


 

If you are running Chrome, be aware that clicking doesn’t appear to work.  It is actually working, and the sprite is being rotated.  However, the click also triggers a mouse move event, immediately snapping the sprite back to where your mouse pointer is!

 

As always, you can download the whole project right here.  It includes a zip of the cocos2D version I used, in case you have trouble working with the current release.

 

 

Read Tutorial 5

Programming

15. June 2012

 

I have recently become more and more interested in HTML5 game development.  However, upon initial investigation I realized there are an absolute ton of libraries and tools out there.  Some of them are amazing, others… not so much.  During my research I put together notes on the key libraries, tools, books, frameworks and IDEs used for HTML5 development.  This is the result of that research.

 

So, if you are looking to get into HTML5 game development, this page could be a very good place to start.

 

This list is by no means complete, it just represents the best of what I have found so far.  If you have recommendations for inclusion ( especially for the links section! ), please let me have them!  There are links as well to non-game related frameworks like Backbone.js, YUI, Mustache, etc. I am intending to do some game tool development in JavaScript for an upcoming RPG project, so those are there as much for my use as anything.

 

I hope you find it handy.

 

Links of interest for HTML5 game developers.

Programming

14. June 2012

 

Set to be published in August, the new book Unity 3.x Scripting has been added to the Unity 3D book round-up.

 

In the publisher’s own words:

  • Make your characters interact with buttons and program triggered action sequences
  • Create custom characters and code dynamic objects and players’ interaction with them
  • Synchronize movement of character and environmental objects
  • Add and control animations to new and existing characters
  • Written in simple and step-by-step format with real life examples, this book is the only one in the market to focus on Unity Scripting

 

As I aim to keep the round-up of books as accurate as possible, if I have missed a book, please let me know!

General

13. June 2012

 

In this tutorial we are going to look at the various Gesture detectors built into the UI layer of the PlayStation Mobile SDK.  They all work fairly consistently, so this tutorial is going to be pretty light in explanation. 

 

There are 6 gesture detectors built into the PS SDK: FlickGestureDetector, DragGestureDetector, TapGestureDetector, LongPressGestureDetector, PinchGestureDetector and DoubleTapGestureDetector.  DoubleTapGestureDetector barely works on the simulator and doesn’t work at all on the actual Vita, so we aren’t going to use it.

 

Let’s jump right into coding.  In this tutorial we are going to create an ImageBox UI widget that you can flick, scale and drag all over the screen using these various GestureDetectors.  Here is the code:

 

using System;
using System.Collections.Generic;
using Sce.Pss.Core;
using Sce.Pss.Core.Environment;
using Sce.Pss.Core.Graphics;
using Sce.Pss.Core.Input;
using Sce.Pss.HighLevel.UI;

namespace UIEffects
{
    public class AppMain
    {
        private static GraphicsContext _graphics;
        private static ImageBox _imageBox;
        private static Label _label;
        private static FlickGestureDetector _flickGestureDetector;
        private static DragGestureDetector _dragGestureDetector;
        private static TapGestureDetector _tapGestureDector;
        private static LongPressGestureDetector _longPressGestureDetector;
        private static PinchGestureDetector _pinchGestureDetector;
        static private bool quit = false;
        private static float _origWidth;
        private static float _origHeight;
        private static float _origX;
        private static float _origY;

        // Note, no housecleaning, all event handlers leak!
        public static void Main(string[] args)
        {
            _graphics = new GraphicsContext();
            UISystem.Initialize(_graphics);

            Sce.Pss.HighLevel.UI.Panel dialog = new Panel();
            dialog.Width = _graphics.GetViewport().Width;
            dialog.Height = _graphics.GetViewport().Height;

            _label = new Label();
            _label.Width = dialog.Width;
            _label.Height = 35;
            _label.Font = new Sce.Pss.Core.Imaging.Font(Sce.Pss.Core.Imaging.FontAlias.System,32,Sce.Pss.Core.Imaging.FontStyle.Regular);
            _label.SetPosition(0,0);
            _label.HorizontalAlignment = HorizontalAlignment.Center;
            _label.TextColor = new UIColor(0,0,0,255);

            _imageBox = new Sce.Pss.HighLevel.UI.ImageBox();
            _imageBox.Image = new ImageAsset("/Application/image.png");
            _imageBox.PivotType = PivotType.MiddleCenter;
            _imageBox.Width = _imageBox.Image.Width;///2;
            _imageBox.Height = _imageBox.Image.Height;///2;
            _imageBox.SetPosition(dialog.Width/2,//-_imageBox.Width/2,
                                  dialog.Height/2);///2-_imageBox.Height/2);

            _origWidth = _imageBox.Width;
            _origHeight = _imageBox.Height;
            _origX = _imageBox.X;
            _origY = _imageBox.Y;

            _flickGestureDetector = new FlickGestureDetector();
            _flickGestureDetector.Direction = FlickDirection.Horizontal;
            _flickGestureDetector.FlickDetected += delegate(object sender, FlickEventArgs e) {
                if(e.Speed.X < 0) // Left
                    new MoveEffect(_imageBox,800f,0f,_imageBox.Y,MoveEffectInterpolator.Elastic).Start();
                else
                    new MoveEffect(_imageBox,800f,dialog.Width,_imageBox.Y,MoveEffectInterpolator.Elastic).Start();
                _label.Text = "Flicked";
            };

            _dragGestureDetector = new DragGestureDetector();
            _dragGestureDetector.DragDetected += delegate(object sender, DragEventArgs e) {
                var newPos = new Vector2(e.Source.X,e.Source.Y) + e.Distance;
                e.Source.SetPosition(newPos.X,newPos.Y);
                _label.Text = "Dragged";
            };

            _tapGestureDector = new TapGestureDetector();
            _tapGestureDector.TapDetected += delegate(object sender, TapEventArgs e) {
                e.Source.SetPosition(dialog.Width/2,dialog.Height/2);
                _label.Text = "Tapped";
            };

            _longPressGestureDetector = new LongPressGestureDetector();
            _longPressGestureDetector.MinPressDuration = 3000f;
            _longPressGestureDetector.LongPressDetected += delegate(object sender, LongPressEventArgs e) {
                MessageDialog.CreateAndShow(MessageDialogStyle.Ok,"Press results",
                    "Hey, let go of me! " + (e.ElapsedTime/1000).ToString() + " seconds elapsed!");
                _label.Text = "Long pressed";
            };

            _pinchGestureDetector = new PinchGestureDetector();
            _pinchGestureDetector.PinchDetected += delegate(object sender, PinchEventArgs e) {
                Vector2 newSize = new Vector2(e.Source.Width * e.Scale,e.Source.Height * e.Scale);
                e.Source.SetSize(newSize.X,newSize.Y);
                _label.Text = "Pinched";
            };

            _imageBox.AddGestureDetector(_flickGestureDetector);
            _imageBox.AddGestureDetector(_dragGestureDetector);
            _imageBox.AddGestureDetector(_tapGestureDector);
            _imageBox.AddGestureDetector(_longPressGestureDetector);
            _imageBox.AddGestureDetector(_pinchGestureDetector);

            dialog.AddChildLast(_imageBox);
            dialog.AddChildLast(_label);

            Sce.Pss.HighLevel.UI.Scene scene = new Sce.Pss.HighLevel.UI.Scene();
            scene.RootWidget.AddChildLast(dialog);
            UISystem.SetScene(scene,new SlideTransition(2000.0f,FourWayDirection.Left,MoveTarget.NextScene,SlideTransitionInterpolator.Linear));

            while(!quit)
            {
                _graphics.SetClearColor(255,255,255,0);
                _graphics.Clear();

                if((GamePad.GetData(0).Buttons & GamePadButtons.Cross) == GamePadButtons.Cross)
                {
                    //Reset
                    _imageBox.SetSize(_origWidth,_origHeight);
                    _imageBox.SetPosition(_origX,_origY);
                }
                if((GamePad.GetData(0).Buttons & GamePadButtons.Triangle) == GamePadButtons.Triangle)
                {
                    quit = true;
                }

                UISystem.Update(Touch.GetData(0));
                UISystem.Render();
                _graphics.SwapBuffers();
            }
        }
    }
}

 

 

First we create our GraphicsContext and initialize our UISystem, then create a Panel named dialog that is going to be our active window.  We then create a label for output purposes, as well as an ImageBox that is going to contain our jet sprite ( oh yeah… add an image to your project; I am reusing the jet sprite from the last tutorial, but you can use anything ).  The Image property of our ImageBox contains an ImageAsset, which we construct by simply passing it the filename of the graphic we want to use.  Next we store the original size and location of the _imageBox, so we can revert back to the defaults later if desired.

 

The first gesture detector we are going to look at is the FlickGestureDetector.  A flick is rapidly sliding your finger in one direction across the screen… think Angry Birds.  The heart of the flick handling is the FlickDetected delegate, repeated here:

 

_flickGestureDetector.FlickDetected += delegate(object sender, FlickEventArgs e) {
    if(e.Speed.X < 0) // Left
        new MoveEffect(_imageBox,800f,0f,_imageBox.Y,MoveEffectInterpolator.Elastic).Start();
    else
        new MoveEffect(_imageBox,800f,dialog.Width,_imageBox.Y,MoveEffectInterpolator.Elastic).Start();
    _label.Text = "Flicked";
};

 

All we are doing here is detecting if the speed is a positive or negative value.  If it is a negative value, it means we are flicking right to left, otherwise it is left to right.  Obviously if you were interested in velocity, you would take the actual value into account.  We indicated with the line:

    _flickGestureDetector.Direction = FlickDirection.Horizontal;

that we are only interested in flicking along the horizontal axis.  Therefore any vertical flicks are going to be ignored. 

 

The flick direction simply determines what the X coordinate is going to be in our MoveEffect call, either 0 ( far left ) or dialog.Width ( far right ).  MoveEffect causes the passed widget ( _imageBox ) to be moved over time ( 800ms in this case ) to the passed coordinates.  The MoveEffectInterpolator determines how the move is going to happen; we chose Elastic, which will cause it to have a bit of a spring effect.  For no effect at all, choose the Linear interpolator.  Essentially, if we flick left, we send the jet to the left side of the screen, while if we flick right, we send the jet to the right side of the screen.

 

Next up is the DragGestureDetector.  Dragging is when you tap, hold, then move the widget.  If we detect a drag, we simply want to translate our _imageBox location to the position the user dragged to.  We accomplish this with this code:

 

_dragGestureDetector = new DragGestureDetector();
_dragGestureDetector.DragDetected += delegate(object sender, DragEventArgs e) {
    var newPos = new Vector2(e.Source.X,e.Source.Y) + e.Distance;
    e.Source.SetPosition(newPos.X,newPos.Y);
    _label.Text = "Dragged";
};

 

As you can see, the various Gesture detectors use remarkably similar handlers.  In this case all we do is get the current position of the dragged widget ( the value in e.Source will be the widget being acted upon ) and add the distance it’s been dragged.

 

Now we add a TapGestureDetector.  In the event of a tap, we simply want to relocate the sprite back to the center of the screen.  Here is the code:

 

_tapGestureDector = new TapGestureDetector();
_tapGestureDector.TapDetected += delegate(object sender, TapEventArgs e) {
    e.Source.SetPosition(dialog.Width/2,dialog.Height/2);
    _label.Text = "Tapped";
};

 

Nothing really new there.  Again, e.Source represents the widget that is being tapped.  When a tap occurs we simply set its location to the center of the screen and log the tapped activity.

 

Next up is the LongPressGestureDetector.  A long press is a tap, followed by a hold for a given threshold of time, in our case 3000 milliseconds.  Once the long press occurs and hits the threshold, the delegate is fired.

 

_longPressGestureDetector = new LongPressGestureDetector();
_longPressGestureDetector.MinPressDuration = 3000f;
_longPressGestureDetector.LongPressDetected += delegate(object sender, LongPressEventArgs e) {
    MessageDialog.CreateAndShow(MessageDialogStyle.Ok,"Press results",
        "Hey, let go of me! " + (e.ElapsedTime/1000).ToString() + " seconds elapsed!");
    _label.Text = "Long pressed";
};

 

In the delegate, all we do is pop up a MessageDialog window.  There are two ways of working with MessageDialog, you can create one directly and then show it, or like we’ve done here, you can use the static CreateAndShow method.

 

Finally we have the PinchGestureDetector.  A pinch is a concurrent two finger tap, followed by sliding your fingers either closer together or further apart.  Ultimately the value of most importance is e.Scale, which tells you whether they pinched larger ( out ) or smaller ( in ) and by how much.  By multiplying the e.Scale value against the widget’s current size, it will scale either larger or smaller.

 

_pinchGestureDetector.PinchDetected += delegate(object sender, PinchEventArgs e) {
    Vector2 newSize = new Vector2(e.Source.Width * e.Scale,e.Source.Height * e.Scale);
    e.Source.SetSize(newSize.X,newSize.Y);
    _label.Text = "Pinched";
};

 

One really confusing thing I ran into is that the PinchGestureDetector’s PinchEndDetector is never fired!  Be aware of this, although in truth you don’t really need it in the end.  As I mentioned at the beginning, there is also a DoubleTapGestureDetector, but it doesn’t work.

 

Finally, we wire all of these GestureDetectors to our ImageBox widget using this code:

_imageBox.AddGestureDetector(_flickGestureDetector);
_imageBox.AddGestureDetector(_dragGestureDetector);
_imageBox.AddGestureDetector(_tapGestureDector);
_imageBox.AddGestureDetector(_longPressGestureDetector);
_imageBox.AddGestureDetector(_pinchGestureDetector);

 

You can have multiple widgets using the same gestures, and as you can see in this example, a single widget can have multiple detectors attached.  The remaining code is all stuff we have covered in prior tutorials.

 

Here is the result in action.  Since the Simulator doesn’t support multitouch, I took a video of my Vita in action.

 

Pinching, zooming, tapping, dragging and pressing on the PlayStation Vita with the PSSDK

 

 

Once again, you can download the full project here.

Programming

9. June 2012

 

As I have been working with Cocos2D-html ( and the PlayStation GameEngine2D, which is based on cocos2D ) lately, I decided to take a look at what the cocos2D book situation was like.  While only about half as large as my Unity 3D book round-up, there is still a nice, growing collection of cocos2D related books.

 

The cocos2D book round-up is a collection of every currently published book on the subject of cocos2D.  It contains all of the details you may want to know when selecting a book ( author, page count, rating, links to Amazon, Safari Books Online and the publisher, publish date, table of contents, publisher description, etc ) all on a single page.  So if you are in the market for a book on cocos2D, or just want to see what all is available, start here.

 

I aim to be as comprehensive as possible, so if I’ve missed a book on the subject, or made a mistake, please let me know!

General


My next project… finally, a game!


4. April 2013

Now that my book is published it's time to move on to my next project.  Given the original purpose of this site, with a two year distraction, I think it's high time that I get around to actually creating and publishing a game!  I intend to share as much of the development process, code, assets, etc… as I go, including having the WIP game playable here!

 

What I have in my head is a science fiction business simulation game.  Trust me, the actual game is a hell of a lot more exciting than that title indicates; I will flesh out the details in further blog posts.  If you played old school BBS games, or many of the recent Kairosoft or Tycoon style games, you should have an inkling of the type of game I am considering.  I actually tend to find these styles of turn based games the best fit for mobile, especially in the world of playing 5 minutes here, 5 minutes there.

 

One of the biggest requirements in creating a game is being able to post about it here.  I want to share as much of the development experience as possible, and hopefully be as interesting to as large an audience as possible.  Ultimately I do intend to sell the game on at least iOS and Android, but I would also like to give it away here.  I would also like to make all of the source, and eventually ( once I've hopefully sold a few copies ) the assets available as I go.

 

So here are my current requirements:

  • Turn based game
  • May require modest 3D support, otherwise sprite based
  • Excellent UI support ( UI heavy game )
  • Available on Android, iOS and on GameFromScratch.com at a minimum
  • In a language of interest or known to a large number of developers
  • Tools at low or no cost, so maximum number of readers can follow along
  • Reasonably quick development time
  • Good library support, little desire to re-invent the wheel

 

For the last week I've been mulling these requirements around in my head and I think I've settled on my technology of choice.  First I will go through my list of considered but discounted technologies and the reasons why.

LoomScript: I recently finished evaluating Loom and I was rather impressed.  There is CSS style markup and tweening support for making a solid UI relatively easily, and it targets all the desired platforms except the web ( MacOS and Windows builds might offset that lack to a degree ).  That said, at the end of the day it's just not that popular yet, so I would be writing about a technology a very small minority of you are interested in.

Unity is an obvious choice.  It would handle the 3D portions better than any other engine I've considered, and it is certainly popular, so it would probably be relevant for a lot of you.  At the end of the day though, it's a 3D engine, not a 2D one.  Of course, you can accomplish a great deal using 3rd party libraries, for an additional cost.  That was the biggest mark against Unity: the cost factor.  Although I own a 3.x license from the giveaway, and could ultimately buy a 4.x license when ready to ship, I don't want to go that route if I don't really need the majority of the features it buys me.  The UI support is also remarkably sparse.

Scaleform was another one I considered.  On the UI side of the fence, it may be the strongest option.  It's basically a game optimized version of Flash and is used in hundreds of games to provide the UI functionality.  That said, it's pretty niche in the indie world at this point and the amount of supporting materials is pretty light right now.

C++: very little gain.  I would love to take a closer look at the GamePlay3D engine, but it's overkill for this game project.

Flash/AIR is another option.  In all my years, I somehow completely avoided doing any Flash development.  Keep in mind, I am not talking about browser based Flash, I am talking about Flash as a development environment.  I haven't completely disregarded Flash as an option, but puzzling out the eco-system was rather confusing, and I'm not really sure what level of commitment would need to be made to work with Flash.  I do really wish FlashDevelop was available on MacOS…  So Flash is not ruled out yet, but probably not.

The LUAs: I really like all of the Lua based game engines, but none are a particularly good fit.  I don't believe any of them support web deployment right now, Corona and Gideros both have a price tag attached, and Moai is a poor fit… I am not writing a high performance game, so there aren't enough wins to make it worthwhile, as I'd have to roll my own UI system, etc.

LibGDX/PlayN: Both support a good number of platforms, and both are libraries of interest to me.  In the case of PlayN, I just found the entire build process ( MAVEN!!! ) so tedious and prone to breakage, plus I would be pretty much rolling my own UI.  LibGDX is on my to-do list to look into further and hasn't been discounted completely, but UI support is there.

HaXe + NME: Still looking into this one; it's a maybe.

 

At the end of the day, that leaves an obvious choice and two maybes.

 

HTML5 It's popular, library support is staggering, UI support is a no brainer ( I can use a few thousand existing HTML libraries ), hosting it on this website is also a no brainer, I can easily port to Windows 8 and Chrome app stores and frankly, I want to see what the HTML5 developer experience is like on a full game.  There is one possible achilles heel to this decision though… native support.  I want to deploy as a native app at the end of the day, and there are options for this ( XDK, CocoonJS, PhoneGap ), but there are also problems.  WebGL support is I believe only available on Cocoon and it's "coming soon".  The game in my mind isn't performance intensive, but I don't know if native WebView is fast enough, probably rendering PhoneGap a non-starter… I need to (very very very soon) try the various native deployment options and see how it goes.

 

That said, if HTML5 doesn't work out, I've got two runners-up picked out: HaXe and libGDX.

 

Is there any mainstream language you would prefer I do this project in?  Anyways… wish me luck, first steps and all of that...

News, Programming

