
27. July 2012

 

One area of Blender that is often overlooked, or that people are downright unaware of, is its video editing capabilities. Hidden behind the 3D functionality we all know and love, under a layer of yes… sometimes confusing UI, lives a remarkably capable NLE ( non-linear editor ).

 

In this tutorial I am going to look at some of the simplest and most common features Blender offers when it comes to video editing. We are going to look at a couple of extremely common tasks, and how you would accomplish them using Blender. Without further ado, let's jump right in.

 

Setting Blender up for video editing

 

The first step is to fire up Blender. I personally am using Blender 2.63, but any version after 2.5x should follow more or less the same steps. NLE functionality was also available in 2.4, but the UI has changed extensively.

 

Once in Blender, click the Choose Screen Layout button and select Video Editing in the dropdown, like this:

 

 

Your Blender should change to the following layout:

 

The NLA editor sequence area is where we add all of our video elements, such as video, audio and images. The copy in the top right is in preview mode ( they are both the same window type ), showing a preview of the video we are creating.

 

The timeline view is where your timeline controls are, as well as controls for setting keys, video duration etc.

 

The graph editor is for fine-tuned control using F-Curves, and will not be used much in this tutorial.

 

Alright, we are now in editing mode, so let's start with something simple.

 

Adding a movie title screen

 

 

As is pretty common, we want to have a static "Title Screen" of sorts. In our case it is just a simple png image we are going to display for 5 seconds. We are recording at 30FPS, so that means 150 frames.
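The frame arithmetic here is simply duration times frame rate. As a quick sanity check ( a throwaway helper, not part of anything we build in this tutorial ):

```javascript
// Convert a duration in seconds to a frame count at a given frame rate.
function secondsToFrames(seconds, fps) {
    return Math.round(seconds * fps);
}

console.log(secondsToFrames(5, 30)); // 150 -- our title screen's length
```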

 

First we need to add an image. I created a 1920x1080 PNG image in GIMP for this. This is because I am working in 1080p output. Use whatever you wish.

 

In the NLA Editor sequence window, locate the menu and choose Add->Image:

 

Locate the file you want as a title screen. On the left-hand side, locate Start Frame and set it to 0, then End Frame and set it to 150:

 

Finally, click Add Image Strip. The image will now appear on Channel 1 of the NLA sequence, like so:

 

 

Let's take a look at the key parts that make up the sequence window.

 

On the left hand side, the vertical axis represents the different channels composing your video. In the crudest sense you can think of these like layers in a graphics program ( with obvious exceptions, such as the fact that they can overlap, and can contain sound ).

 

Across the bottom, the horizontal axis represents elapsed time.

 

In this case, the purple bar represents your image. Different components have different colours, as we will see. The length of this bar represents the duration the image is visible in the timeline.

 

You can navigate this window using many familiar Blender hotkeys. The most important for now is G)rab, to move the active item around. The middle mouse button pans, the scroll wheel zooms in and out, and the "." key zooms to fit the selection/sequence. With only a single channel, these last two features aren't really all that useful yet.

 

So, that's the sequence view, but you should also notice at the top right corner, the other NLA window is showing an active preview, like so:

 

As you move from frame to frame, this area will update.

 

Alright, so we now have a title screen that displays for 5 seconds. Let's add some video:

 

Adding a video

 

 

In the NLA Sequence view, click Add, then Movie.

 

Just like before, select the movie file you want to add. Set the start frame to 151, and set the channel to 2.

 

Now you will see the following results in the sequence:

 

As you can see, the video starts right where the image ends. Depending on the video file you added, you may have gotten two strips like I have… what is going on here? Well, the blue strip represents the video portion of the file we added, while the green strip represents the audio track. In this case we want to keep the audio. If you didn't, getting rid of it is as simple as right clicking it, then hitting X to delete it. There is one potential problem though: the audio isn't synced by default. Let's correct that right away.

 

In the Timeline window, select Playback, and make sure AV-sync is selected.

 

Now audio and video will be synchronized.

 

At this point, we have one other issue… Our newly added video may be longer than our timeline.

 

Right click your video in the NLA Sequence window and take a look at the properties to the right ( hit N if the properties window isn't visible ). Locate the Length:

 

This added another 567 frames to our total length, for a total of 717 frames. We need to update our video length. In the timeline, locate End, click it in the middle and update to 717:

 

Now you can go ahead and preview your handiwork up to this point. Just to the right of the end frame you just specified is a set of VCR-like controls:

 

The field to the left ( currently valued 191 ) represents the current frame, while the buttons are for controlling playback. The preview window at the top right should update as you go from frame to frame.

 

Now we have a title screen and a movie with audio, synced and playing back. Let's look at one of the next most common tasks…

 

Adding a watermark

 

 

Adding a watermark or signature to a video is one of the most common tasks you need to perform when editing a video, and fortunately it is remarkably simple. In fact, it works exactly the same as when we added the title screen. All we are doing is adding a mostly transparent image over top of our scene for the entire duration of the film. Create a 1080p image in whatever image editor you prefer, just make sure everywhere except your watermark is transparent. I used this image:

Nothing really exciting. Just like when we added the title screen, in the NLA Editor window select Add->Image and select your file. On the left-hand side we want it to start at Frame 0, go until Frame 717, and be on channel 4, like so:

 

If you look in your preview window, it probably just went all black except the watermark, like so:

This is because the image is obscuring the strips below. Don't worry, this is easily fixed.

 

In the NLA Editor sequence, make sure the watermark image is selected by right clicking it, then in the properties window ( hit N if not visible ), select the Blend drop down and choose Alpha Over, like so:

 

Then voila, in your preview window everything is back to normal, just now with a watermark in the bottom right corner!

 

The process thus far has assumed that your video was perfect and continuous, something that is rarely true. So now we look at…

 

 

Editing a video clip

 

 

Just like back in the days of physical film, cutting up and reordering video is a remarkably common task. Fortunately it is also quite easy. We are now going to split our video in two, and put an interruption in between the two pieces.

 

The first thing you want to do is select the point where you want to make the cut. You can do this using the VCR-style controls, by left clicking anywhere in the timeline, by left clicking anywhere in the NLA Editor sequence, or by directly entering a frame in the current frame box. The green line indicates the currently selected frame:

Once you have your video where you want to perform the cut, hold down SHIFT and right click both the video and audio portions of the video strip ( the blue and emerald bars ). With both selected, hit 'K' to perform a hard cut. At this point, you have essentially "cut" the film and audio tracks into two separate entities.

 

Let's put a pause of 60 frames ( 2 seconds ) in between our cut. SHIFT-select the audio and video strips on the right, press G, and move them 60 frames to the right, like so:

 

Now let's insert an image to be displayed during the gap. Using the same method as before, add an image. In my case I want it to start at frame 231, end at frame 291, and be added on channel 4, like so:

 

Now if you play the video you will have a 2 second pause in both video and audio, while your image is displayed.

 

We are getting near the end, so now let's add a simple effect. Let's fade our video to black over the final 100 frames.

 

Adding a video effect

 

 

We now want to select 100 frames from the end of our video. The easiest way to do this is probably in the timeline, although it is still only going up to 250! To resize the timeline to show our entire sequence, either select View->View All or hit the Home key. You will see the timeline now goes all the way up to 720; now left click around the 620 mark. This will move the current frame to 620 in the timeline, the NLA Editor window and the current frame box:

 

Right click the video strip ( the blue one ), then choose Add->Effect Strip->Color.

 

In the properties window, make sure the resulting effect starts at 620 and has a duration of 100. We then want to set the opacity to 0 ( making it completely transparent ) and set Blend to Alpha Over, like this:

 

Now, in order, SHIFT-click first the video strip, then the color strip we just added, then select Add->Effect Strip->Gamma Cross. This will add another effect that results in the color slowly being drawn over the movie. In the sequence it will appear like this:

 

We now have a ( rather crappy ) fade to black to end our film.

 

Finally, let's fade out our audio. In the timeline, move to around the 560 frame mark. In the properties window ( N if not on screen ), scroll down and hover over the Volume field, like so:

 

Hit the "I" key. This will set a keyframe. The keyframed value ( volume ) will turn yellow:

 

Now we want to advance to the end of our sequence (frame 720 for me), and this time change the value to 0, and set another keyframe. Like so:

 

Now if you look in the graph editor window ( which has been collecting dust until now ), you can see a curve representing the audio falloff we just set:

You can control the rate the audio drops off by manipulating this curve. We however are going to leave it as it is.
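Conceptually, the two keyframes define a ramp from full volume at frame 560 down to silence at frame 720. Blender's default F-Curve interpolation is actually Bezier, so the linear sketch below is only an approximation of the falloff:

```javascript
// Approximate volume at a given frame, assuming a linear ramp between
// a (startFrame, volume 1) keyframe and an (endFrame, volume 0) keyframe.
function fadedVolume(frame, startFrame, endFrame) {
    if (frame <= startFrame) return 1;
    if (frame >= endFrame) return 0;
    return 1 - (frame - startFrame) / (endFrame - startFrame);
}

console.log(fadedVolume(640, 560, 720)); // 0.5 -- halfway through the fade
```

Dragging the curve handles in the graph editor is just reshaping this function between the same two endpoints.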

 



And sometimes… you encounter a bug. After cutting the audio strip, I was unable to keyframe the second portion of the audio strip. It gives the error:

Could not insert keyframe, as RNA Path is invalid for the given ID (ID = SCScene, Path = sequence_editor.sequences_all["20120723_221045.002"].volume)

It is annoying, but easily fixed. Right click to select the strip causing the problem, then select the menu Strip->Duplicate Strip. Now delete the original ( X ), and move your duplicate into its place. Keyframing should now work again.

 



And we are now done with the editing, so let's go ahead and render it. This is where the Blender UI kind of falls on its face, as this process is nowhere near as intuitive as it should be!

 

Rendering your video to file

 

 

First, we need to open up the Properties view. We can either open a new window, or repurpose an existing one. Since we aren't really using the graph editor, I am going to repurpose it. Click the window type icon and select Properties, like so:

 

The first thing we want to do is set up our dimensions. Since we are outputting at 1080p, let's start with that preset. Drop down the dimensions presets and pick HDTV 1080p.

 

In my case though, my source video ( and desired output ) is actually 30FPS, not 24, so let's change that next. Drop down Frame Rate and select 30:

 

Now scroll down to the Output section, choose your output location and output format:

I am going with H.264 for this example.

 

Now scroll down and expand the Encoding section. This part will be entirely dependent on your computer, the codecs you have installed, and your desired results. I will ultimately be uploading to YouTube or Vimeo, which will both re-encode anyway, so I am going to encode at a fairly high quality. The bitrate is what determines file size vs quality. Also, by default it will not encode audio, so make sure you select an audio codec if you want your audio encoded. If you select an audio codec that isn't compatible with your video codec, you will get an error when you try to render. Here are the settings I've used.
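As a rough rule of thumb, the resulting file size is just bitrate times duration. A back-of-the-envelope helper ( the 6000 kbit/s figure below is purely hypothetical, not the setting from my screenshot ):

```javascript
// Rough output size in megabytes for a video-only stream at the given
// bitrate (kbit/s), for a clip of the given length in frames.
// Ignores audio and container overhead.
function estimateSizeMB(bitrateKbps, frames, fps) {
    var seconds = frames / fps;
    return (bitrateKbps * seconds) / 8 / 1024; // kbit -> kB -> MB
}

// Our 717 frame clip at 30 FPS, at a hypothetical 6000 kbit/s:
console.log(estimateSizeMB(6000, 717, 30).toFixed(1) + " MB"); // "17.5 MB"
```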

 

 

Now we are finally ready to output our video file. Scroll back up and click the Animation button:

 

And now Blender will churn away for a period of time creating your movie. If you are running Windows 7, a progress bar will update over the Blender application icon. You can press ESC to cancel the render at any time.

 

 

The Results

 

 

And here are the (rather lame) fruits of our labour!

 

Blender edited video results

Art

26. July 2012

Epic has just released the July update of the popular Unreal Engine.

New features include:

  • Perforce version control integration
  • Normal map workflow improvements
  • Numerous Unreal Editor refinements
  • Nvidia Open Automate integration

 

Mobile improvements including

  • DrawTimer for indicating busy status
  • Better save game encryption
  • Better iPod background music support
  • SMS and Mail dialog support
  • Better memory management on lower end devices

 

Overall, if you aren't all that excited about Perforce or Open Automate, it's a pretty unremarkable release.

 

You can download it here and read the complete release notes here. You can read about a rabbit with a pancake on its head here.

News

25. July 2012

 

This post is a table of contents of sorts for the recently completed series documenting the creation of a simple game (web application) for my daughter.  Although the game is quite simple, the application itself covers quite a bit.  There are a ton of tutorials on the internet about the various individual pieces I use, but very few illustrating putting them all together to create a complete app.

 

In this tutorial, we cover:

 

Part 1 -- Node and cocos2D

Setting up a Node.js server that is able to serve both static and dynamic content using Express.  By the end of this part we are successfully hosting a cocos2D application.

 

Part 2 -- Deploying to Heroku

This part covers deploying your Node application into the cloud, using Heroku's completely free tier.  This part is optional; you can run your application anywhere you want, so long as Node is supported.

 

Part 3 – The guts and plumbing

This part is the heart of the application itself.  It illustrates how to upload and serve data from a Node server.  The upload portion is managed using the YUI framework from Yahoo, and illustrates how you could make a more traditional web application using JavaScript.

 

Part 4 – The Game

This part creates the actual “game” if you can call it that.  It illustrates creating a simple cocos2D HTML game that interacts with the NodeJS server side.

 

Part 5 – Adding a Database to the mix

Losing all your data every time Node restarts or Heroku feels like erasing them gets old quick, so I added a database to the mix.  In this case I used the CouchDB NoSQL database, hosted on IrisCouch using the Nano library.

 

Part 6 – Phonegap?

Ok, this part is actually TBD.  I am in the process of porting to PhoneGap, to bundle this application as a native phone application.  I will update here with another tutorial post if it is successful.

 

 

 

The Results

 

You can see the application running here

 

It is pretty simple overall.  Choose a pair of images using the dropdowns at the top.  Then you can click to cycle through the various images.  Additionally, you can click the settings button, which will bring you to the settings app we created in Part 3.  Here you can upload new images and manage existing ones.  Warning: anyone can upload images, so I take no responsibility for what they might contain!  Anyone can also delete images at any time, so if it is empty or your images disappear, this is probably why.

 

Finally, I have pushed the complete source tree up to GitHub, which is available here.

General

25. July 2012

In prior parts we set up a Node.js/Express server to serve a cocos2D HTML project, showed how to host that project in the cloud using Heroku, then created the backbone of the application itself, providing the ability to populate the app with data, as well as a means to retrieve that data.

 

Now, it’s time to actually create the game itself.

 

As I mentioned at the very beginning, the game is remarkably simple and isn't really a game at all.  It is a simple series of two configurable pictures: it shows both side by side; if you click, one zooms in; if you click again, the next image zooms in; then finally, both images are shown side by side again.  The end result is I can show my daughter a sequence of events so she can better understand cause and effect.

 

Here is a screenshot of our ultimate end result:


 

Not earth shattering by any means, but it a) accomplishes what I need to accomplish, and can be hosted on any device and accessed anywhere I need it, and b) demonstrates all of the core technologies needed to make a much more complex web-hosted game or web application.

 

 

Let's jump right in and look at the code.  The graphics are powered by cocos2D HTML, a 2D JavaScript-based game library.  You can read some tutorials about it here, as I am not going to get into how cocos2D works in any detail.

 

First we need our AppDelegate.js class, which fires up our cocos2D game.

var cc = cc || {};

cc.AppDelegate = cc.Application.extend({
    ctor:function () {
        this._super();
    },
    initInstance:function () {
        return true;
    },
    applicationDidFinishLaunching:function () {
        var pDirector = cc.Director.sharedDirector();

        var size = pDirector.getWinSize();
        pDirector.setAnimationInterval(1.0 / 60);
        var pScene = FirstThis.scene();
        pDirector.runWithScene(pScene);
        return true;
    },
    applicationDidEnterBackground:function () {
        cc.Director.sharedDirector().pause();
    },
    applicationWillEnterForeground:function () {
        cc.Director.sharedDirector().resume();
    }
});

 

For more information on what is happening here, check the tutorial link I posted earlier.  The most important part to us is the creation of FirstThis, which is the heart of our application.

 

Let's take a look at FirstThis.js.  Again, I apologize for the wonky formatting; it was done to fit the blog.

 

var FirstThis = cc.LayerColor.extend({
    leftSprite:null,
    rightSprite:null,
    mode:0,
    imageChanged:function(imgName,whichSprite){
        if(this.mode != 0) // We were in transition when select box was changed!
        {
            this.resetVisibility();
            this.mode=0;
        }
        this.removeAllChildrenWithCleanup(true);

        if(this.leftSprite != null && whichSprite=="right")
        {
            this.addChild(this.leftSprite);
        }
        if(this.rightSprite != null && whichSprite=="left"){
            this.addChild(this.rightSprite);
        }

        var imageSize;
        YUI().use('node','io-base',function(Y){
            var results = Y.io("/imageSize/" + imgName, {"sync":true});
            imageSize = JSON.parse(results.responseText);
        });

        var newSpriteWidth = cc.Director.sharedDirector().getWinSize().width/2;
        var newSpriteHeight = cc.Director.sharedDirector().getWinSize().height/2;

        if(whichSprite == "left"){
            this.leftSprite = cc.Sprite.create("/image/" + imgName,
                new cc.Rect(0,0,imageSize.width,imageSize.height));
            this.addChild(this.leftSprite);
            this.leftSprite.setScale(
                (newSpriteWidth * this.leftSprite.getScaleX())/imageSize.width);
            this.leftSprite.setAnchorPoint(new cc.Point(0,1));
            this.leftSprite.setPosition(
                new cc.Point(0,cc.Director.sharedDirector().getWinSize().height));
        }
        else
        {
            this.rightSprite = cc.Sprite.create("/image/" + imgName,
                new cc.Rect(0,0,imageSize.width,imageSize.height));
            this.addChild(this.rightSprite);
            this.rightSprite.setScale(
                (newSpriteWidth * this.rightSprite.getScaleX())/imageSize.width);
            this.rightSprite.setAnchorPoint(new cc.Point(0,1));
            this.rightSprite.setPosition(
            new cc.Point(newSpriteWidth,cc.Director.sharedDirector().getWinSize().height));
        }
    },
    resetVisibility:function()
    {
        this.leftSprite.setIsVisible(true);
        this.rightSprite.setIsVisible(true);
        this.leftSprite.setPosition(
            new cc.Point(0,cc.Director.sharedDirector().getWinSize().height));
        this.rightSprite.setPosition(
                new cc.Point(cc.Director.sharedDirector().getWinSize().width/2,
                cc.Director.sharedDirector().getWinSize().height));
    },
    ctor:function()
    {
        this._super();
    },
    init:function()
    {
        this.setIsTouchEnabled(true);
        this.initWithColor(cc.ccc4(0,0,0,255));

        var that = this;

        YUI().use('node',function(Y){
            Y.one("#firstSel").on("change",function(event){
                if(event.currentTarget.get("selectedIndex") == 0) return;
                    that.imageChanged(event.currentTarget.get("value"),"left");
            });
            Y.one("#thenSel").on("change",function(event){
                if(event.currentTarget.get("selectedIndex") == 0) return;
                    that.imageChanged(event.currentTarget.get("value"),"right");
            });
        });
        this.setAnchorPoint(0,0);
        return this;
    },
    ccTouchesEnded:function (pTouch,pEvent){
        if(this.leftSprite != null && this.rightSprite != null ){
            this.mode++;
            if(this.mode == 1)
            {
                this.leftSprite.setIsVisible(true);
                this.rightSprite.setIsVisible(false);
                this.leftSprite.setPosition(
                        new cc.Point(cc.Director.sharedDirector().getWinSize().width/4,
                        cc.Director.sharedDirector().getWinSize().height));
            }
            else if(this.mode == 2)
            {
                this.leftSprite.setIsVisible(false);
                this.rightSprite.setIsVisible(true);
                this.rightSprite.setPosition(
                    new cc.Point(cc.Director.sharedDirector().getWinSize().width/4,
                        cc.Director.sharedDirector().getWinSize().height));
            }
            else{
                this.resetVisibility();
                this.mode = 0;
            }
        }

    }
});


FirstThis.scene = function() {
    var scene = cc.Scene.create();
    var layer = FirstThis.layer();

    scene.addChild(layer);
    return scene;
}

FirstThis.layer = function() {
    var pRet = new FirstThis();

    if(pRet && pRet.init()){
        return pRet;
    }
    return null;
}

 

Again, most of what is going on here is covered in the earlier tutorial, but I will give a quick overview, and point out some of the application specific oddities as I go.

 

Starting from the top, we declare a pair of variables for holding our two active sprites, as well as a value for the current “mode”, which essentially represents our click state, which will make sense shortly.

 

We then define a method, imageChanged, which will be called when the user selects a different value in one of the two select boxes at the top of the screen.  On change, we first check whether the mode is not 0, which means we are in the process of showing images to the user ( meaning a single image might be visible right now ); in that case we reset visibility and positioning so our two images are side by side again, and reset the mode back to zero.  Then we remove all of the sprites from the layer, effectively erasing the screen.

 

Next we want to check if the user is updating the left or the right image.  If the user is updating the right image, for example, we check to see if the left image has been assigned a value yet, and if it has, add it back to the scene.  This test prevents trying to push a null sprite onto the scene if the user hasn't selected images with both drop-downs yet.  We do this for both the left and right image.

 

Next we run into a bit of a snag.  This application wants to size images so they each take up 50% of the screen.  There is a problem with this, however: when you declare a sprite with cocos2D, unless you specify the image size, it doesn't have any dimension data!  This is ok in a game, where you will probably know the image dimensions in advance, but in this situation, where the images are completely dynamic, it presents a major problem!  This is a problem we solved rather nicely on the server side using Node.  We will look at the changes to server.js shortly, but for now realize we added a web service call that returns the specified image's dimensions.  We then make a synchronous call using YUI's io() module to retrieve that information… note the complete lack of error handling here!  My bad.
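To patch over that missing error handling, one defensive option is to validate the response before using it and fall back to a default size.  The helper below is hypothetical ( it is not in the actual project code ):

```javascript
// Parse an /imageSize response defensively: if the JSON is malformed or
// missing numeric width/height fields, return the fallback dimensions.
function parseImageSize(responseText, fallback) {
    try {
        var size = JSON.parse(responseText);
        if (size && typeof size.width === "number" &&
                    typeof size.height === "number") {
            return size;
        }
    } catch (e) {
        // Malformed JSON, e.g. an HTML error page from the server.
    }
    return fallback;
}

console.log(parseImageSize('{"width":640,"height":480}', {width: 1, height: 1}));
console.log(parseImageSize('<html>500</html>', {width: 1, height: 1}));
```

The responseText from the YUI io() call could be run through something like this before the sprite is created.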

 

We ideally want to scale our images so their width is half the screen.  We now create our sprite, depending on whether it is on the left or right side, but the logic is basically identical.  First we create the sprite using the filename passed as a value from our select box, and the dimensions we fetched earlier.  We then add that sprite to the scene, scale it so the width is 50% of the screen ( scaling up or down as required ), then position it relative to the top left corner of the sprite, with the X value changing depending on whether the sprite is on the left or right.
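The scaling boils down to a single ratio: the new scale is the target width ( half the window ) divided by the image's native width.  Isolated as a plain function for illustration ( this helper is hypothetical, not part of the project ):

```javascript
// Scale factor that makes a sprite of the given native pixel width
// occupy exactly half of the window, preserving aspect ratio.
function halfWindowScale(windowWidth, imageWidth) {
    return (windowWidth / 2) / imageWidth;
}

console.log(halfWindowScale(1920, 640));  // 1.5  -- small image scaled up
console.log(halfWindowScale(1920, 3840)); // 0.25 -- large image scaled down
```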

 

Next up we create the function resetVisibility(), which we used earlier on.  Basically it just makes both sprites visible again and puts them back in their default positions.  Next we implement a simple constructor that calls the layer's constructor.  This is to ensure some methods needed to handle touch input are properly called.  The cocos2D tutorial on handling input covers this in a bit more detail.

 

Next up is our initialization function, named appropriately enough init().  We tell cocos2D that we are going to handle touch (click) events, and that we want to create a layer with a black opaque background.  Next we wire up event handlers to our two select boxes that call our imageChanged() method when, um, an image is changed.  Lastly we tell our layer to anchor using the default bottom left corner.  This call is rather superfluous, as this is the default; I like to include it for peace of mind though, as if there is one thing I fight with ( and hate! ) about cocos, it's the coordinate system.

 

Next up we have our touch handler, ccTouchesEnded, which will be called when the user taps the screen or clicks the mouse.  This is where the mode variable comes into play.  At a value of 0, it means the screen hasn't been touched yet, so on the first touch we set the left image to the center of the screen and make the right image invisible.  On the next touch we do the opposite, then on any further touches we set both images back to their default positions and reset mode back to zero.  The remaining code is simple boilerplate setup code used to create our layer, which again is covered in more detail in these tutorials.
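Stripped of the cocos2D calls, that click handling is a tiny three-state machine.  A hypothetical sketch of just the state logic:

```javascript
// Click states: 0 = both images side by side, 1 = only the left image,
// 2 = only the right image. Each tap advances the mode, wrapping to 0.
function nextState(mode) {
    mode = (mode + 1) % 3;
    return {
        mode: mode,
        leftVisible: mode !== 2,  // hidden only while the right image shows
        rightVisible: mode !== 1  // hidden only while the left image shows
    };
}

console.log(nextState(0)); // mode 1: left image visible, right hidden
console.log(nextState(2)); // mode 0: both images visible again
```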

 

In a nutshell, that is our game's code.  You may remember earlier we made a change to server.js; let's take a look at the file now.

 

Here is server.js in its entirety.  Remember, you do not need to host on Heroku to run this app.  Simply run node server.js from the command line or terminal, then hit localhost:3000 in your web browser.

 

server.js

var express = require('express'),
    server = express.createServer(),
    im = require('imagemagick'),
    files = {};

server.use('/cocos2d', express.static(__dirname + '/cocos2d') );
server.use('/cocosDenshion', express.static(__dirname + '/cocosDenshion') );
server.use('/classes', express.static(__dirname + '/classes') );
server.use('/resources', express.static(__dirname + '/resources') );

server.use(express.bodyParser());

server.get('/', function(req,res){
    res.sendfile('index.html');
    console.log('Sent index.html');
});

server.get('/settings',function(req,res){
   res.sendfile('settings.html');
   console.log('Sent settings.html');
});

// API calls
server.get('/image/:name', function(req,res){
    if(files[req.params.name])
    {
        res.contentType(files[req.params.name].contentType);
        res.sendfile(files[req.params.name].path);

        console.log("Returning file " + req.params.name);
    }
});

server.get('/imageSize/:name',function(req,res){
   im.identify(files[req.params.name].path,function(err,features){
       console.log("image/" + req.params.name);
       if(err) throw err;
       else
        res.json({ "width":features.width, "height":features.height });
   });
});

server.get('/getPhotos', function(req,res){
    res.json(files);

});

server.get('/clearAll', function(req,res){
    files = {};
    res.statusCode = 200;
    res.send("");
})

server.post('/upload',function(req,res){
    files[req.files.Filedata.name] = {
        "name":req.files.Filedata.name,
        "path":req.files.Filedata.path,
        "size":req.files.Filedata.size,
        "contentType":req.files.Filedata.type,
        "description":req.body.description };
        console.log(req.files.Filedata);

    console.log(Object.keys(files).length);
    res.statusCode = 200;
    res.send("");
});
server.listen(process.env.PORT || 3000);

Most of this code we have seen before, but there are a couple of new changes.  First thing to notice is:

    im = require('imagemagick'),

We added a new library to the mix to handle image processing: the venerable ImageMagick.  You need to install this library before running this code.  First we need to add it to Node; that is as simple as, from a command line, cd to your project directory, then type:

npm install imagemagick

If you are deploying to Heroku, you also need to update the dependencies in the package.json file, so Heroku is aware of them.  That file should now look like:

{
    "name": "firstthis",
    "version": "0.0.1",
    "dependencies": {
        "express": "2.5.x",
        "imagemagick":"0.1.x"
    },
    "engines": {
        "node": "0.8.x",
        "npm":  "1.1.x"
    }
}

Finally, you need to install ImageMagick itself.  On Windows the easiest way is to download the binary installer, while on Linux it's probably easiest to use the package manager of your choice.  Once you install ImageMagick, be sure to start a new command line/terminal so it picks up the path variables ImageMagick sets.

 

Ok, now that we have the dependency out of the way, the code itself is trivial:

server.get('/imageSize/:name',function(req,res){
   im.identify(files[req.params.name].path,function(err,features){
       console.log("image/" + req.params.name);
       if(err) throw err;
       else
        res.json({ "width":features.width, "height":features.height });
   });
});

We get file details about the image passed in the URL ( for example, with the URL /imageSize/imagename.jpg, name will be imagename.jpg ), then, if no errors occur, we return the width and height as a JSON response.

 

 

Finally, we get to the actual HTML, index.html which is served by Node if you request the “/” of the website.

<html>
<head>
 <script src="http://yui.yahooapis.com/3.5.1/build/yui/yui-min.js"></script>
 <script>
  YUI().use('node','io-base',function(Y){

    Y.on("load", function(){
     var canvas = Y.DOM.byId('gameCanvas');
     canvas.setAttribute("width",window.innerWidth-30);
     canvas.setAttribute("height", window.innerHeight-70);
     Y.Get.script(['/classes/cocos2d.js']);

    });

    Y.io('/getPhotos',{
     on: {
      complete:function(id,response){
       var files = JSON.parse(response.responseText);
       var firstSel = Y.DOM.byId("firstSel");
       var thenSel = Y.DOM.byId("thenSel");

       for(var key in files)
       {
           firstSel.options.add(
             new Option(files[key].description,files[key].name));
           thenSel.options.add(
             new Option(files[key].description,files[key].name));
       }
      }
     }
    });

   });
 </script>
</head>
<body style="padding:0; margin: 0; background: black">
 <form>
  <span style="text-align:left;vertical-align:top;padding-top:0px">
   <label style="color:white;height:40px;font-size:26px;vertical-align:middle">
       First:
   </label>
   <select style="height:40px;font-size:22px;width:250px" id="firstSel">
       <option selected>Drop down to choose</option>
   </select>
   <label style="color:white;height:40px;font-size:26px;
   padding-left:10px;vertical-align:middle;">Then:</label>
   <select style="height:40px;font-size:22px;width:250px" id="thenSel">
     <option>Drop down to choose</option>
   </select>
  </span>
  <span style="float:right;vertical-align:top;margin-top:0px;top:0px;">
   <input align=right type=button value=settings id=settings
       style="height:40px;font-size:26px"
      onclick="document.location.href='/settings';"  />
  </span>
 </form>
<div style="text-align:center;clear:both;">
    <canvas id="gameCanvas">
        Your browser does not support the canvas tag
    </canvas>
</div>
</body>
</html>

We have seen everything here before, except perhaps this small but extremely important chunk of code:

    Y.on("load", function(){
     var canvas = Y.DOM.byId('gameCanvas');
     canvas.setAttribute("width",window.innerWidth-30);
     canvas.setAttribute("height", window.innerHeight-70);
     Y.Get.script(['/classes/cocos2d.js']);

    });

This code will be executed once the page finishes loading.  As you may have noticed, the gameCanvas canvas tag did not have a size specified, because we want it to take the full width of whatever device it runs on.  Unfortunately you cannot just say width=100% and be done with it, so we set the width and height programmatically when the page loads.  Finally, to make sure that the cocos2d objects aren’t created until after the canvas tag is resized, I deferred loading until now: after we resize the canvas, we dynamically load the cocos2d script, making sure it initializes with the proper dimensions.
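The sizing arithmetic itself is trivial, but pulling it out as a plain function makes the intent explicit.  The 30 and 70 pixel offsets are assumptions baked into this project: fudge room for a bit of horizontal padding and for the form row above the canvas.  This is a sketch of the logic, not code from the project:

```javascript
// Compute the canvas dimensions from the window's inner size, leaving
// room for horizontal padding (30px) and the form row above (70px).
function fitCanvas(innerWidth, innerHeight) {
  return {
    width: innerWidth - 30,
    height: innerHeight - 70
  };
}

var size = fitCanvas(1920, 1080);
console.log(size.width, size.height); // 1890 1010
```

If you change the layout above the canvas, these two constants are what you would need to adjust.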

So, after all our hard work, here is our completed application running in action ( or use the direct link here, if you don’t want to see it in an iframe ).  Again, a warning: the data for this application is completely viewer driven, so I give no guarantees the content is appropriate!  Please be civil.

That basically completes the application tutorial.  There are a few issues right now:

1- It doesn’t persist user data.  If the server restarts or a certain period of time elapses, all the data stored on Heroku is lost.  This is easily fixed, but beyond the scope of this tutorial.

2- There is a complete lack of hardening or error handling, so the application is probably incredibly fragile.

3- There are a few HTML bugs ( oh… aren’t there always? ).  The change event doesn’t always fire the first time you select from the first drop down ( this is a browser bug, and can be worked around, but the workaround isn’t tutorial friendly ).  Also, on Safari mobile, cocos2d sprite scaling doesn’t appear to work.

I hope you found this series useful.  As I was working with git anyway for the Heroku deployment, I decided to make the entire project available on GitHub.  So if you want to fork it and play around, have fun!  I am a massive GitHub newbie though, so don’t be shocked if I screwed something up.  If you, like me, have a child that isn’t handling transitions very well, I hope you find it useful.  I am actually making a more production-worthy version of this application, so if you need an online first-then board, drop me a line!

On a final note, I am going to look into making a PhoneGap version of this application.  If that exercise bears fruit, I will have another section for this tutorial!

Programming

24. July 2012

For those of you who are regular readers of this site, you have probably noticed I am a big fan of Safari Books Online, a subscription-based book service.  They released an iOS application a year or so back and it was… lacking.  Android subscribers, though, were left completely in the dark.  There was ( and is ) a mobile version of the site, http://m.safaribooksonline.com, but unfortunately it did a horrible job with code examples, which was a pretty big deal.  So, having the mobile app on Android is very nice.

It was only just released and I’ve only had a small amount of experience with it, but I will say it is vastly improved.  I just wish it would cache books a bit further ahead, as flipping more than 2 pages gets you an annoying “Loading content” display.  The big feature of the Safari to Go app, though, is the ability to take up to 3 books offline with you.  I have yet to try this feature, but it may eliminate those annoying loading messages!

All told though, it looks like a solid release.

I did encounter one oddity though: I couldn’t find it on Google Play.  I had to go to the website here, log in and then download it.

So, if you are a Safari subscriber, be sure to check it out.  If you aren’t, it really is worth looking into.  If you cannot find it in the store on your device, be sure to try the direct link above.

General


It Came From YouTube–Week 1


19. December 2015

You may have noticed that this year GameFromScratch was increasingly active creating videos on YouTube.  Initially it was pretty much a 1 to 1 relationship with GameFromScratch.com; that is, for every video on YouTube, there was a corresponding post here on GFS.  Recently, however, I have found some topics are more video friendly or more text friendly, and that 1 to 1 relationship doesn’t always exist.  Therefore I’ve decided to launch this weekly recap series, which simply brings together the last week’s YouTube videos in a single place.


This week saw the launch of a new video series, Bad GameDev! No Cookie!, which looks at game development mistakes in actual games.  So far there are two videos in the series.

The first video looks at the bad third person camera in the on-rails iOS shooter Freeblade.  Unfortunately, Camtasia picked up the wrong mic for the voice over in this video, so the audio quality is horrid.  Sorry about that.

Bad GameDev! No Cookie! Game Design Mistakes: Freeblade

Next in the series we looked at Space Marine and showed the folly of a bad FoV.

Bad GameDev! No Cookie! Game Design Mistakes: Space Marine

We also had 3 additions to the GameDev Toolbox, an ongoing video series showcasing the tools of the game development trade:

Texture Packer

Sculptris

Tiled Map Editor

We also took a quick look at the AirPlay/Google Cast desktop server, Reflector2

Reflector2

And, of course, the recap of last week’s game development news.

Week 4 News Recap

General
