Creating dynamically equipped characters in 2D and 3D games

9. January 2014

 

Right off the hop for my new game I have to tackle a problem that many games face, but one that could have a huge impact on how my game is structured, so I am going to address it with a proof of concept right up front.  That thing, as the title probably spoils, is character customization.

 

As I said earlier, this is a pretty common problem in games and something you see questions about quite often if you hang around game development forums for any period of time.  So what do I mean by character customization?  Ever played Diablo or a similar title?  When you change your gear in Diablo, your character is updated on screen.  This behaviour is pretty much expected in this day and age… personally I really hate when I am playing an RPG and the weapon I equip has no effect on my avatar.  That said, it is certainly a non-trivial problem and one that could have a pretty big impact on your game.

 

Let’s start by taking a look at the various options… at least the ones I can think of.

 

2D - Pre-render all possible configurations

Of course, brute forcing it is always a possibility and in a few cases may actually be ideal.  Basically, for a 2D game you simply create a sprite sheet for every combination of equipment.  So if you have an animation sequence for “walk_left” and “walk_right”, you would have to create a version for each possible equipable item: you might have “walk_left_with_axe” and “walk_left_with_sword” sprite sheets for the different weapon options.  If you are rendering your sprite sheets automatically from a 3D application, this could actually be the easiest approach.  Of course, there are some seriously massive downsides.  Memory usage quickly becomes outrageous: every additional item means another full copy of every affected animation, and if you have multiple independent slots the counts multiply ( four weapons and three armour sets is already twelve copies of every sheet ).  Once you start adding more and more options, this approach quickly becomes unfeasible.  If you only have a very small amount of customization ( two or three weapon options ), it may appeal due to its simplicity.  With anything more than a couple of combinations though, this approach is pretty much useless.

 

2D – Pre-rendered spritesheet + pre-rendered weapon animation frames.

You can use a regular sprite sheet and simply keep the weapons separate.  So basically you have two separate sprite sheets, one of the character without weapons, one with just the weapons.  Then at run-time you combine them.   Consider these two sprite sheets ( taken from a StackOverflow question ).

Animated Character:

Player SpriteSheet

Weapon:

Weapon SpriteSheet

Then at run-time you combine the two sprites together depending on what is equipped.

In fact, if your weapon isn’t animated ( as in, it doesn’t move on its own, like a sword, as opposed to a weapon like a whip or nunchucks ) you can get by using a hard point system, where you define the mount position and orientation of the weapon ( for example, the player’s right hand ).  Basically, for each frame of animation you provide an x and y coordinate as well as a rotation amount that describes where and how the weapon should be mounted.  Then at runtime you combine the two sprites, positioning and rotating the weapon sprite relative to the mount/hardpoint.  This way, you only need a single image per equipable weapon.
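
To make that concrete, here is a minimal sketch of the idea in LibGDX ( the engine I will be experimenting with later in this post ); any 2D API with rotated draws works the same way.  The MountPoint class and its fields are hypothetical stand-ins for whatever per-frame data your own tool or file format produces.

import com.badlogic.gdx.graphics.g2d.SpriteBatch;
import com.badlogic.gdx.graphics.g2d.TextureRegion;

public class WeaponMounter {

    /** Hypothetical per-frame hard point, loaded from your own data file:
     *  an offset from the character's draw position plus a rotation. */
    public static class MountPoint {
        public float offsetX, offsetY;   // pixels relative to the character's position
        public float rotationDeg;        // how the weapon is rotated this frame
    }

    /** Draws the character frame, then the weapon positioned and rotated at the
     *  frame's mount point.  Assumes batch.begin() has already been called. */
    public void draw(SpriteBatch batch, TextureRegion characterFrame,
                     TextureRegion weapon, MountPoint mount,
                     float charX, float charY) {
        batch.draw(characterFrame, charX, charY);
        batch.draw(weapon,
                charX + mount.offsetX, charY + mount.offsetY,  // weapon position
                0f, 0f,                           // rotation origin: assumes the handle sits at the image origin
                weapon.getRegionWidth(), weapon.getRegionHeight(),
                1f, 1f,                           // no scaling
                mount.rotationDeg);               // per-frame rotation from the mount data
    }
}

Where the handle actually sits in the weapon image, and what format the mount data is stored in, is entirely up to your own tooling.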

This approach has a couple of downsides.  First, you need a separate data file to hold mount point data and possibly a tool to generate it.  Second, it makes bounding/hit detection more difficult, as the mounted weapon may exceed the dimensions of the originating sprite.  Third, you need to deal with z-depth issues: you can’t simply paste the weapon on top of the sprite, because what happens when the character is walking right but carrying the weapon in the left hand?  To handle this your draw order basically needs to be: render obscured weapons, render the character sprite, render foreground weapons.  Finally, weapons with their own animation, like a whip snapping, would require even more effort.

All told, this is probably the easiest system for 2D animation that is still flexible.  It requires a bit more work than simply brute forcing your sprite sheets, but the payoff in reduced size is well worth it, especially if you have more than a couple of items.  The other big win with this approach is that weapons can be re-used across different sprites: model a single sword and it is available to every sprite sheet that has hard/mount point information.

 

3D - Bone/hardpoint system

If your game is pure 3D this is often the way you want to go.  If your character is a knight, for example, in your 3D modelling application you create named hard points in your model, either as bones or as an object of some form such as a null object, point or invisible box.  Then at runtime you bind other models to those locations.  So for example, if you equip a long sword and shield, your code searches for the “right_arm_shield” bone and uses it as the parent object for a shield 3D model, then binds the sword to, say, the “left_arm_hand” bone.  You can see an example of this behaviour in this UDK example.

This approach is basically identical to the 2D method just mentioned, except in 3D, and you generally don’t need to define your own hardpoint system, as most 3D packages and engines already support it out of the box, either by binding to a bone or by parenting 3D objects to named locations.  As when working in 2D, your weapon and character remain two separate entities, like so:

3P_Weapon_Attach.JPG
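
Since I will be experimenting in LibGDX ( see the task list at the end of this post ), here is a rough sketch of what that binding can look like there.  The bone name and the offset matrix are made up for illustration; the gist is simply copying the bone's world transform onto the weapon's transform every frame, after the character's animation has updated.

import com.badlogic.gdx.graphics.g3d.ModelInstance;
import com.badlogic.gdx.graphics.g3d.model.Node;
import com.badlogic.gdx.math.Matrix4;

public class WeaponAttachment {

    /** Hypothetical bone name; this depends entirely on how the model was rigged. */
    private static final String HAND_BONE = "left_arm_hand";

    /** Optional grip correction relative to the bone, left as identity here. */
    private final Matrix4 localOffset = new Matrix4();

    /** Call once per frame, after the character's animation has been updated,
     *  to snap the weapon's world transform onto the hand bone. */
    public void update(ModelInstance character, ModelInstance weapon) {
        Node hand = character.getNode(HAND_BONE);
        weapon.transform
              .set(character.transform)     // character's world transform
              .mul(hand.globalTransform)    // bone's transform within the model
              .mul(localOffset);            // fine-tune grip position/rotation
    }
}

Swapping equipment then just means swapping which weapon ModelInstance gets bound; the character model and its animations are untouched.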

One of the major advantages of 3D over 2D is flexibility.  Most animations in 2D need to be pre-calculated, while in 3D you have a great deal more flexibility with the computer filling in the blanks for you.  Unlike in 2D, additional 3D animations add very little to the data file sizes.

 

2D Bone based animation

One of the major downsides to 2D animation is that, as you add more animations, it takes up more and more space.  People have been working on solutions, one of the most common of which is 2D bone animation.  How exactly can you use bones on a 2D image file, you ask?  That’s a good question!  In 3D it’s pretty simple… the motion/influence of the bone transforms the attached geometry.  In 2D there is no geometry, so how can you do this?

Well, the simple version is, you cut your bitmap or vector image up into body parts.  So instead of a single image of an animated character, you instead have an image for the foot, lower leg, upper leg, torso, waist, shoulder, left upper arm, etc…  Then you draw 2D bones that control the animation of this hierarchy of images.  There are applications/libraries that support this, making it mostly transparent for you.  Here is one such example, Spine from Esoteric Software.

 

The character:

Creating Bones

 

It looks like a single image ( and you can see a couple bones being drawn in the foot/leg area ), but it is actually composed of a few dozen source images:

Set Parent

 

Basically the workflow goes: you create your traditional image in your regular art program, except instead of creating a single image, you create dozens of hopefully seamless ones representing each body part.  You stitch them together, setting the drawing order ( what’s covering what ), then define bones to create animations.  This has the double advantage of massively decreasing space requirements while simplifying the animation process.  The downsides are that creating your initial image is a great deal more difficult and, of course, you need to buy ( or roll your own ) bone system, such as Spine.
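
To make the idea concrete, here is a tiny hand-rolled sketch of the underlying math ( this is not Spine's or Spriter's actual API, just the transform composition those tools handle for you ): each body-part image hangs off a bone, and a bone's world position and rotation are its parent's combined with its own local offset and rotation.

import java.util.ArrayList;
import java.util.List;

/** Toy 2D bone: a local offset and rotation relative to a parent bone. */
public class Bone {
    public final String name;
    public final Bone parent;           // null for the root bone
    public float localX, localY;        // offset from the parent, in pixels
    public float localRotationDeg;      // rotation relative to the parent
    public final List<Bone> children = new ArrayList<>();

    public Bone(String name, Bone parent, float x, float y) {
        this.name = name;
        this.parent = parent;
        this.localX = x;
        this.localY = y;
        if (parent != null) parent.children.add(this);
    }

    /** World rotation = sum of local rotations up the chain. */
    public float worldRotationDeg() {
        return parent == null ? localRotationDeg
                              : parent.worldRotationDeg() + localRotationDeg;
    }

    /** World position = parent position plus this bone's offset,
     *  rotated by the parent's world rotation. */
    public float[] worldPosition() {
        if (parent == null) return new float[] { localX, localY };
        float[] p = parent.worldPosition();
        double r = Math.toRadians(parent.worldRotationDeg());
        float x = (float) (p[0] + localX * Math.cos(r) - localY * Math.sin(r));
        float y = (float) (p[1] + localX * Math.sin(r) + localY * Math.cos(r));
        return new float[] { x, y };
    }
}

Rotating the upper-arm bone automatically carries the lower arm, hand and any weapon image parented below it; the part images are simply drawn at their bone's world position and rotation.  Real tools add keyframes, interpolation, draw order, skinning and a lot more.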

 

Spine is not the only option; there is also Spriter from BrashMonkey and probably others I am unaware of.  Both of these are commercial packages, but both are dirt cheap ( IMHO ), with price tags ranging from $25 to $250 and roughly $60 ( Spine ) and $25 ( Spriter ) being the typical licenses.

The nice thing about taking this segmented bone approach… you basically get the ability to swap components or bind weapons for free.

 

2.5D hybrid approach

2 1/2D is becoming more and more popular these days.  Essentially a 2.5D game mixes 2D and 3D assets: sometimes 2D sprites over a 3D background, but more commonly 3D-rendered characters over a 2D background.  How does this work?

Well, there are two basic approaches.  You model, texture and animate your 3D model as per normal, and any character customization is done as if you were working in full 3D.  Where the two approaches differ, depending on performance, the hardware you are running on, etc…, is what happens next.  The first option is to generate a sprite sheet dynamically: when your character is loaded or changes equipment, in the background your game renders a sprite sheet of the updated character containing every animation frame, then uses that sheet as if it were a normal 2D sprite sheet.  The other option is to render your character fresh each time it moves.  Creating a sprite sheet consumes more memory but puts very little ongoing demand on the CPU or GPU, while generating the character’s current animation frame on the fly takes almost no memory but carries a much higher per-frame CPU/GPU cost.  Which is right for your game is obviously up to you.
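
To make this a bit more concrete, here is a hedged LibGDX sketch of rendering one pose of a 3D character into an off-screen texture; loop it over every animation frame and pack the results and you have the dynamically generated sprite sheet, or call it every frame for the render-on-demand variant.  The class and method names are mine, and camera/environment setup is omitted.

import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.graphics.Camera;
import com.badlogic.gdx.graphics.GL20;
import com.badlogic.gdx.graphics.Pixmap;
import com.badlogic.gdx.graphics.g2d.TextureRegion;
import com.badlogic.gdx.graphics.g3d.ModelBatch;
import com.badlogic.gdx.graphics.g3d.ModelInstance;
import com.badlogic.gdx.graphics.g3d.utils.AnimationController;
import com.badlogic.gdx.graphics.glutils.FrameBuffer;

public class CharacterPreRenderer {

    // Off-screen render target; 256x256 is an arbitrary frame size.
    private final FrameBuffer fbo =
            new FrameBuffer(Pixmap.Format.RGBA8888, 256, 256, true);

    /** Renders the character's current pose into an off-screen texture that can
     *  then be drawn like any other 2D sprite, or copied into a sprite sheet. */
    public TextureRegion renderPose(ModelBatch modelBatch, Camera camera,
                                    ModelInstance character,
                                    AnimationController animation, float delta) {
        animation.update(delta);              // advance the current animation

        fbo.begin();
        Gdx.gl.glClearColor(0f, 0f, 0f, 0f);  // transparent background
        Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT | GL20.GL_DEPTH_BUFFER_BIT);
        modelBatch.begin(camera);
        modelBatch.render(character);
        modelBatch.end();
        fbo.end();

        TextureRegion region = new TextureRegion(fbo.getColorBufferTexture());
        region.flip(false, true);             // frame buffer textures come out upside down
        return region;
    }
}

The vertical flip at the end is there because textures rendered into a frame buffer end up inverted relative to a normal texture region.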

Of course the 2.5D approach certainly has its share of problems too.  First off, it is the most complicated of the bunch.  Second, you can’t do any “by hand” calculations on your generated sprites/sprite sheets, as they are now rendered dynamically; this means no pre-calculated bounding volumes, for example.  On the other hand, the 2.5D approach gives you most of the advantages of 3D ( compact animation data, easy programmatic modification, the ability to alter texture mapping, lighting effects, etc… ) with the simplicity of dealing with a traditional 2D world.

One animation to rule them all!

You may be thinking WAHOO… I can cut my work way down now!  All I need to do is animate an attack sequence and then simply substitute the different weapon images.  Not so fast, I am sorry to say.  Think about it this way… are the animations for swinging an axe and a sword the same?  What about a pole arm?  You are still going to need several different animations to support different classes of weapons, but these techniques will still vastly reduce the amount of data and work required while allowing a ton of customization.

 

 

So then, what am I going to do?

So, which approach am I taking?  Ultimately I am going to require a great deal of customization, going far beyond the examples described above.  One example scenario is a Mech with a customizable left weapon… this weapon could be an arm holding a gun, a turret containing one or more guns, or possibly even a large pack of rockets.  Each “weapon” could have its own animation set.  Oh, and I am much more proficient in 3D than 2D.  On the other hand, the game itself is a 2D game and I would prefer to keep it that way for the simplicity that brings in a number of areas ( level creation tools, path-finding and other AI tasks, physics, cheating at art, etc… ).  As a result, I am going to try the 2.5D approach, where the mech is created and animated in 3D, dynamically equipped, then rendered to 2D.  I am going to try both pre-rendering sprite sheets and dynamically rendering a single frame and see what the overhead of each is.  I also have the compounded issue of dealing with enemy characters that are dynamically generated as well, so any impact will be magnified many times over!

If I can’t get adequate performance from either approach, I may be forced to go full 3D and emulate a 2D look.  As I said, it’s one of those things that can have a major impact on your game!  That said, I am pretty confident performance won’t be too much of a factor.  My immediate takeaway task list is:

  • get a LibGDX project set up that can load and render a textured, animated 3D model exported from Blender. I am pretty new to 3D in LibGDX ( okay… completely new to it )
  • bind another model to a bone on this model ( aka, a sword to a hand bone )
  • try to dynamically generate a sprite sheet of the character’s complete animation.  Measure resulting memory usage and time required to generate sheet
  • re-architect the code to instead generate the character’s current animation frame on demand, each frame.
  • scale both scenarios up to include multiple instances and see results on performance
  • measure performance impact of each approach on both desktop and mobile.

 

The outcome of this experiment will ultimately decide what approach I take.  Stay tuned if that process sounds interesting!

 

Oh, and of course, these are only the scenarios I could come up with for dealing with dynamically equipped characters, if you have another idea, please share it!





Using ZeroBrane Studio with Moai ( it also works with LOVE)

27. September 2012

 

The author contacted me a few weeks back with a heads up that he had a Moai friendly Lua IDE in development and I said I would look into it… and well, I didn’t.  Things came up, then something else and something else, and frankly it got pushed to the back of my mind.  In the end that was a good thing too, as since then the author has added auto-completion… a critical feature.

 

As of writing though, this feature is only available from the github sources.  So long as you have cygwin configured ( if you are working with Moai, you probably already do ) and git installed, the process is fairly simple:

 

cd /cygdrive/c/folder/to/install/to

git clone https://github.com/pkulchenko/ZeroBraneStudio.git

 

This will create a folder ZeroBraneStudio with all the appropriate files. 

EDIT (29/09/2012): Author made update, should no longer need to set permissions described below

There is another catch, git mangles the executable permissions on Windows.  The long answer is well… long, the hackishly easy answer is to run:

chmod a+x -R ZeroBraneStudio

 

There is one last catch, you need to set the MOAI_BIN environment variable to point to your MOAI host folder, either that or add it to your PATH variable.  ( I suggest the former ).  To set the MOAI_BIN path, simply open a command prompt and type:

setx MOAI_BIN c:/path/to/moai/host

 

You can of course set the environment variables using the System control panel, but this will require you to reboot your computer for the change to take effect.  SetX simply requires you close the console to take effect.

 

Now you should be able to run the Studio; just double click zbstudio.exe to start it.  Hopefully the author will have a new release soon that removes the need for all of this ( except the MOAI_BIN environment variable, that is ).  Finally, run ZeroBrane and this is what you see:

 

( screenshot: ZeroBrane Studio main window )

 

 

So then… why?  What does ZeroBraneStudio bring to the table that SublimeText or IntelliJ don’t have?

 

That’s an easy one.  I mentioned the first one earlier:

 

( screenshot: auto-completion )

 

Auto-completion!  After years of using Visual Studio, this one is a gigantic must for me.  Granted, as I covered earlier, IntelliJ can be configured to support auto-completion.  But ZeroBrane Studio has a massive leg up on IntelliJ, watch this!

 

( screenshot )

 

This you cannot do with IntelliJ, and it is a gigantic advantage.  But auto-completion isn’t the only thing ZB brings to the table: ZeroBrane Studio has the ability to debug running code!  You can set breakpoints, step in/over/out, set watches and do dynamic expression evaluations, pretty much most of what you expect to see in a debugger.  Oddly enough, this needed a Windows Firewall exception.

 

To debug your code, put the cursor on a line and hit F9 to create a breakpoint ( a red stop sign will appear in the margin ).  Then select Project->Start Debugging Server.  Then select Project->Start Debugging or hit F5 to run your code and voila:

( screenshot: execution stopped at a breakpoint )

 

Code execution stops at your breakpoint.  F10, Ctrl+F10 and Shift+F10 let you navigate through your code.  You can hover your mouse over a variable to inspect its value.  You can also right-click a variable and create a watch window like this:

( screenshot: watch window )

Unfortunately you can’t drill down into an individual Lua table, but hopefully this feature gets added soon!

 

Perhaps coolest of all, ZeroBrane Studio enables REPL interaction.  You can open the console and interact with your code directly:

( screenshot: interactive console )

 

As you can see, you can interact with your values in real time, and here are the results in the output:

( screenshot: output window )

 

It also has the ability to analyze your code, and show you the plethora of stupid mistakes you are making:

( screenshot: code analysis results )

 

Not everything is perfect.  As mentioned earlier, watch expressions can’t drill down into Lua tables, hopefully something that will change soon.  Additionally, it would be nice if the Moai host could be added using a configuration setting instead of MOAI_BIN, as that makes it difficult to switch between runtimes ( moai, moai-untz, release, debug, etc… ), but this is a minor point.  It would also be nice to be able to control more via context menus ( add breakpoint, step over/in, etc ).  The code syntax highlighting could be improved a bit as well.  And of course, as a straight code editor, it isn’t going to give IntelliJ or SublimeText a run for its money just yet.

 

 

That said, this is an absolutely invaluable tool for people working with Moai or LOVE, the debugger and full auto-completion guarantee that this is going to be a stable part of my toolset going forward.  Hopefully the author is able to add more detailed debugging information soon.  If you are working in Moai or LOVE, you owe it to yourself to check out ZeroBrane Studio.

 

Speaking of which, you can get more information here.  Keep in mind, MOAI auto-completion is currently only available in the source distribution, at least as of version 0.32.  Hopefully the new release is bundled soon.

 

EDIT:

I have spoken with the author, and it appears a few of my issues aren’t issues after all.

The system will check for Moai executable in your PATH environment variable. You can also set "path.moai = 'd:/lua/moai/moai'" in cfg/user.lua (see cfg/user-sample.lua for example).

 

EDIT2:

ZeroBrane actually can drill down into a Lua table; in my relative Lua newbishness, I made a pair of mistakes.  First off, it only works on local variables.  Second, Moai does not return tables, it returns userdata, which ZeroBrane currently can’t parse ( fingers crossed he figures out a way! ).

 

( screenshot: drilling into a local Lua table )

 

Here is the behaviour I was expecting to see, and exactly what I got once I defined the table as a local.





Autodesk goes indie with Scaleform

10. July 2012

Well, sorta.

 

Autodesk just announced that Scaleform is now available as a Unity3D plugin, or as a standalone mobile SDK.  More importantly, it’s available at a price tag of $295 per platform.

 

Scaleform is used to create UIs for games using the Flash toolset, including ActionScript.  If you’ve played a game in the last year, chances are you’ve seen Scaleform in action.  It powered such titles as Deus Ex: Human Revolution, Civilizations, Warhammer 40K, Skyrim and more.  A partial list is available here.

 

Up until now, buying Scaleform has been a tricky proposition.  Autodesk had no published pricing, so you could only really get it embedded in existing game engines like Unreal, or negotiate directly with Autodesk ( can you say ouch? ).  So while $300 may seem like a lot, especially considering Unity starts at $400, it is still a massive value compared to before.

 

Scaleform for mobile

 

For more details head on over to the Autodesk Gameware website.





PlayCanvas. A new 3D game engine for the web

27. June 2012


 

OK, I will be the first to admit that there are a few hundred HTML5 3D game engines in development, so why draw any attention to this one?

 

Well, there are a couple very good reasons.

 

First off, star power.  There are a pair of fairly successful people behind PlayCanvas.  The CEO is Will Eastcott, former tech director at Activision, who worked on Call of Duty, Grand Theft Auto and Max Payne.  The CTO is Dave Evans, formerly at Sony and on the team responsible for PlayStation Home.  So the people behind the company understand real game requirements and have the experience to pull it off.

 

Second, it’s a full tool chain.  This is perhaps the biggest single selling point: it contains a visual designer and importers for 3D Studio Max, Maya and Blender, plus of course the libraries you need to create a game, including dynamic lighting, shadow effects, bones/skinned animation, skeletal blending, 3D spatial audio, a component entity system for scripting and more.

 

So basically… they are going head to head with Unity, but focused on HTML5.  They have a demo section, but it is currently a bit underwhelming.

 

At this point, you may be wondering about cost…

During our closed beta, the PlayCanvas development environment will be free and unrestricted for all. When the closed beta ends, the vast majority of users will be able to continue to use PlayCanvas at no cost within some generous limitations. Pro accounts will be available on a subscription basis at a nominal monthly cost. These accounts will relax certain restrictions and unlock additional power-user oriented features in the interface.

 

Currently it is in closed beta, so you will have to contact them for more details.  They are however presenting at Google IO ( which is happening right now ), so perhaps we will have more details soon.

 

If this sounds interesting to you, head on over to playcanvas to learn more.





Creating a simple YUI Application Framework and Node app

22. June 2012

 

In this tutorial we are going to implement a web app end to end, using YUI on the client and hosted in Node.  It isn’t actually going to do much, but it will illustrate everything you need to create a client and server in JavaScript using the YUI App framework and Node.  Over time we will add more bells and whistles, but everything you need to create a fully functioning web application is already here.

 

After looking into it a little closer, YUI is incredibly well documented, but when it comes to actually implementing the YUI App Framework in a real-world environment there are gaps in the information.  Generally all you are going to find are samples with a gigantic HTML file where all of the script is located, which obviously isn’t good form nor very maintainable.  Unfortunately, when you actually set about splitting your YUI App into separate files and templates, you are completely on your own!  In fact, this tutorial may be the first one on the subject on the internet; I certainly couldn’t find one.

 

That said, there are still some great resources I referred to over and over while figuring this stuff out.  First and foremost was the App Framework link I posted earlier.  It effectively illustrates using models and views, just not how to organize them across files in a complex project.  The GitHub contributor app was another great sample, but it again was effectively one giant HTML file.  Finally there is the photosnear.me application’s source code up on GitHub.  It is a full YUI App/NodeJS sample and is certainly worth reading, but as an example for people learning it is frankly a bit too clever.  Plus it renders templates out using Node, something I wanted to avoid.

 

Alright, let’s do a quick overview of how the program flow works.  Don’t worry, it’s actually a lot less complicated than it looks, and while it is initially over-engineered for what it ends up accomplishing, in the end you have the bones of everything you need to create a larger, more complex application and a code structure that will scale with that complexity.

 

( image: the project’s file hierarchy )

This ( in the image to the left ) is the file hierarchy that we are about to create.  In our root directory are two files: server.js, which is our primary NodeJS application, and index.html, which is the heart of our web application and where the YUI object is created.

 

Additionally, in a folder named scripts we create a pair of directories models and views.  Models are essentially your applications data, while Views are used to display your data.  Finally within the views folder we have another folder templates which is where our handlebar templates reside.  If you have done any PHP or ASP coding, templates are probably familiar to you already.  Essentially they are used to dynamically insert data into HTML, it will make more sense shortly.

 

We are going to implement one model, person.js, which stores simple information about a Person, and one view person.View.js which is responsible for displaying the Person’s information in the browser, and does so using the person.Template.

 

Now lets actually take a look at how it all works, in the order it is executed.

 

First we need our Node based server, which is going to serve the HTML to the browser ( and do much, much more in the future ).  Create a new file named server.js:

var express = require('express'),
    server = express.createServer();

server.use('/scripts', express.static(__dirname + '/scripts'));

server.get('/', function (req, res) {
    res.sendfile('index.html');
});

server.get('*', function (req, res) {
    res.redirect('/#' + req.url, 302);
});

server.listen(process.env.PORT || 3000);

 

Essentially we are creating an express powered server.  The server.use() call enables our server to serve static ( non-dynamic ) files located in the /scripts folder and below.  This is where we serve all of our javascript and template files from; without this call, we would either need to manually map each file or we would get a 404 when attempting to access any of them on the server.  Next we set our server up to handle two particular requests: if you request the root of the website ( / ), we return our index.html file, otherwise we redirect all other requests back to the root with the url appended after a hash tag.  For more details, read this, although the truth is we won’t really make much use of it.  Finally we start our server listening on port 3000 ( or process.env.PORT if hosted ).  Amazingly enough, these few lines of code provide a fully functioning, if somewhat basic, web server.  At this point you can open a browser and browse to http://localhost:3000… well, that is, once you start your server.

 

Starting the server is as simple as running node server.js from your command line.  This assumes you have installed NodeJS and added its directory to your PATH environment variable, something I highly recommend you do.  Now that we have our working server, let’s go about creating our root webpage index.html.

<!DOCTYPE html>
<html>
<head>
    <title>GameFromScratch example YUI Framework/NodeJS application</title>
</head>
<body>
    <script src="http://yui.yahooapis.com/3.5.1/build/yui/yui-min.js"></script>
    <script src="/scripts/models/person.js"></script>
    <script src="/scripts/views/person.View.js"></script>
    <script>
        YUI().use('app', 'personModel', 'personView', function (Y) {
            var app = new Y.App({
                views: {
                    personView: {type: 'PersonView'}
                }
            });

            app.route('/', function () {
                var person = new Y.Person();
                this.showView('personView', {model: person});
            });

            app.render().dispatch();
        });
    </script>
</body>
</html>

 

The most important line here is the YUI seed call, where we pull in yui-min.js; at this point we have access to the YUI libraries.  Next we link in our model and view, which we will see shortly.  Ideally you would move these to a separate config file at some point in the future as you add more and more scripts.  These three lines cause all of our javascript files to be included in the project.

 

The YUI().use call is the unique way YUI works: you pass in which parts of the YUI library you want to access, and it creates an object with *JUST* that stuff included, in the form of the Y object.  In this case, we want the YUI App class ( and only it! ) from the YUI framework, as well as our two classes personModel and personView, which we will see in a bit more detail shortly.  If you use additional YUI functionality, you need to add it in the use() call.

 

We create our app and configure it to have a single view named personView of type PersonView.  Then we set up our first ( and only ) route, for dealing with the URL /.  As you add more functionality you will add more routes.  In the event a user requests the web root, we create a Person model, then show the personView and pass it the person model we just created.  This is how you connect data and views together using the YUI App framework.  We then render our app and call dispatch(), which causes our app’s url to be routed ( which ultimately causes our person model and view to be created ).  If you aren’t used to Javascript and are used to programming languages that run top down, this might seem a bit alien to you at first.  Don’t worry, you get used to it eventually… mostly.

 

Now lets take a look at our model person.js

YUI.add('personModel', function(Y){
    Y.Person = Y.Base.create('person', Y.Model, [], {
        getName: function(){
            return this.get('name');
        }
    }, {
        ATTRS: {
            name:   { value: 'Mike' },
            height: { value: 6 },
            age:    { value: 35 }
        }
    });
}, '0.0.1', { requires: ['model'] });

 

Remember in index.html, in the YUI().use() call, where we specified personModel and personView?  This is how we made those classes accessible.  By calling YUI.add() we add our class into the YUI namespace, so you can use YUI().use() to include it when needed, like we did in index.html.

 

Next we create our new class by deriving from Y.Model using Y.Base.create(); you can find more details here.  We declare a single function getName(), then a series of three attributes: name, height and age.  We set our version to ‘0.0.1’, chosen completely at random.  When inside a YUI.add() call, we specify the YUI libraries we need as an array named requires instead of in the use() call.  Otherwise it works the same as a .use() call, creating a customized Y object consisting of just the classes you need.

 

Now lets take a look at the view, person.View.js

 

YUI.add('personView', function(Y){
    Y.PersonView = Y.Base.create('personView', Y.View, [], {
        initializer: function(){
            var that = this,
                request = Y.io('/scripts/views/templates/person.Template', {
                    on: {
                        complete: function(id, response){
                            var template = Y.Handlebars.compile(response.responseText);
                            that.get('container').setHTML(
                                template(that.get('model').getAttrs(['name', 'age', 'height'])));
                        }
                    }
                });
        },
        render: function(){
            return this;
        }
    });
}, '0.0.1', { requires: ['view', 'io-base', 'personModel', 'handlebars'] });

 

Like person.js, we use YUI.add() to add personView to YUI for availability elsewhere.  Again we used Y.Base.create(), this time to extend Y.View.  The rest that follows is all pretty horrifically hacky, but sadly I couldn’t find a better way to do the things I want.  The first horrible hack is var that = this, which simply takes a copy of PersonView’s this pointer, as later, during the callback, this will actually refer to something completely different.  The next hack was dealing with including Handlebars templates, something no sites that I could find on the web illustrate, because they are using a single HTML file ( which makes the task of including a template trivial ).

 

The problem is, I wanted to load a Handlebars template ( we will see it in a moment ) in the client, and there are a few existing options, none of which I wanted to deal with.  One option is to create your template programmatically using JavaScript, which seemed even more hacky ( and IMHO defeats the entire point of templating in the first place! ).  You can also precompile your templates, which I will probably do later, but during development this just seemed like an annoyance.  The photosnear.me site includes them on the server side using Node, something I wanted to avoid ( it’s a more complex process over all, and doesn’t lend itself well to a tutorial ).  So in the end, I loaded them using Y.io, which allows you to make asynchronous networking requests; we use it to read in our template file person.Template.  Y.io provides a series of callbacks, of which we implement complete: we read the response text as our template, “compile” it using Y.Handlebars, then “run” the template by calling template(), passing it the data it will populate itself with; in our case, the name, age and height attributes from our personModel.  The call to template() returns our fully populated HTML, which we assign to our view’s container using the setHTML() method.

 

Finally, lets take a look at person.Template, our simple Handlebars template:

<div align=right>
    <img src="http://www.gamefromscratch.com/image.axd?picture=HTML-5-RPG_thumb_1.png"
         alt="GameFromScratch HTML5 RPG logo" />
</div>
<p><hr /></p>
<div>
    <h2>About {{name}}:</h2>
    <ul>
        <li>{{name}} is {{height}} feet tall and {{age}} years of age.</li>
    </ul>
</div>

 

As you can see, Handlebars templates are pretty much just straight HTML, with small variations to support templating.  The values {{name}}, {{height}} and {{age}} are the placeholders that get populated with data: they look at the data passed in during the template() call and attempt to find matching values.  This is a very basic example of what Handlebars can do; you can find more details here.

 

Now, if you haven’t done so already, start your server using the command node server.js ( assuming node is in your PATH ).

 

Then, open a web browser and navigate to http://localhost:3000, and if all went well you should see:

 

( screenshot: the application running in a browser )

 

 

Granted, it is not a very exciting application, but what you are seeing here is a fully working client/server application with a model, a view and templating.  There is one thing I should point out at this point… in the traditional sense, this isn’t really an MVC application; there is no C(ontroller), or to a certain extent you could look at the template as the view and the view as the controller!  But don’t do that, it’s really quite confusing!  Just know that we have accomplished the same goals: our data layer is reusable and testable, and our view is disconnected from the logic and data.  Don’t worry, the benefits of all of this work will become clear as we progress, and certainly once we start adding more complexity.

 

In the near future, we will turn it into a bit more of an application.

 

 

You can download the project source code here.





PlayStation Mobile SDK Development tutorial content


 

 

The following is a complete list of the PS Suite tutorials and how-to documents on GameFromScratch.com.  Additional PlayStation Suite related posts may be available under the PSSDK tag. 

 

These tutorials cover a beta product, so changes are going to occur.  Read this guide to cover any changes that may have happened since these tutorials were written. So far, the changes are quite minor.

 

 

A look around the PlayStation Mobile tools and SDK

Walks you through what is in the PS SDK, where everything is located and what each thing does.  It also contains a C# class library infographic. If you’ve just downloaded the SDK and are trying to figure things out, start here.

 

PlayStation Mobile SDK Tutorial 1: Hello World

Shows you how to use PS Studio to create, configure and run a new project as well as creating an app.cfg file.  Demonstrates programming the code to create a “Hello World” sprite and display it full screen on your display. Introduces the GameEngine2D library.

 

PlayStation Mobile SDK Tutorial 2: Hello World2, Hello Harder!

This post builds on Tutorial 1 and illustrates the various ways to programmatically access the gamepad.  It adds the ability to scale and rotate your hello world sprite using the gamepad.  Shows how to manually handle the GameEngine2D game loop.

 

Debugging PlayStation Mobile Applications on your Vita device

Walks through the process of debugging on an actual PlayStation Vita device, including both on the PS Vita itself as well as in PS Studio.

 

Exporting from Blender to PS Mobile: Blender to Vita in under 5 minutes.

This tutorial, part one of two, covers creating a model in Blender, UV mapping, texturing and exporting it in COLLADA format for use in PlayStation Suite.

 

Exporting from Blender to PS Mobile: Part 2

This tutorial, part two of two, covers converting the model into a Vita compatible format, importing your model into PlayStation Suite as well as code to display your model in 3D.

 

All things audio

This tutorial covers playing sound effects as well as background music.  It also demonstrates using a UI created in UI Composer.

 

Working with sprite sheets in PS Mobile

This tutorial covers creating an animation sprite sheet, and the code required to use it in PlayStation suite, resulting in a simple walk cycle.

 

Handling updates in GameEngine2D in PlayStation Mobile

This tutorial demonstrates the various ways to handling updates using GameEngine2D. Covers Schedule(), overriding Update() as well as creating an ActionBase class.

 

UI Composer and Networking Part 1

This two part tutorial will cover creating a networking GUI client that accesses a high score server over the network.  This part handles creating the GUI and the code to display it.

 

UI Composer and Networking Part 2

Part two of the UI/Networking tutorial, this one covers wiring up the UI to actually do something, as well as the networking calls. Also covers creating/link projects and using JSON.

 

Adding Physics with the FarSeer library (*)

This isn’t actually a tutorial like the rest so far, but is still relevant.  This “not-tutorial” shows you how to add the FarSeer physics engine.  It doesn’t *actually* cover using the Far Seer library though, expect that in a later tutorial.

 

Speeding your PS Mobile sprite rendering with SpriteList

This tutorial covers using SpriteList to greatly speed up Sprite rendering.  It also shows how to create a GUI dynamically.

 

Pinching, Tapping, Dragging and Flicking…

This tutorial covers pinching, tapping, dragging and flicking an ImageBox all around the screen.  Covers working with all of the Gesture detectors built into the SDK.

 

Creating Scene Transitions

This tutorial covers using transition effects between scenes.  It shows how to replace scenes, then add a bit of graphical flare with a fade and cross fade effect.

 

Messing with a GameEngine2D sprite’s UV coordinates

This tutorial illustrates how to manipulate a SpriteUV’s UV coordinates, to translate, crop or rotate a sprite’s texture.  Also a small primer on UV coordinates if the topic is new to you.

 

A complete game programmed with PlayStation Mobile Studio

In this multi-part tutorial, we create a complete physics based Pong clone, putting together all of what you have learned from earlier tutorials and a little bit more.  Showing how to develop a complete game using the PSM SDK.




