
7. October 2014

 

 

In this part of the Cocos2d-x tutorial series we are going to take a look at what’s involved in handling keyboard events.  If you went through the mouse/touch tutorial, a lot of this is going to seem very familiar, as the process is quite similar.  That said, keyboard handling does have its own special set of problems to deal with.

Let's jump straight in to an example. Once again I assume you already know how to create your own AppDelegate; if you can't, I suggest you jump back to this part first.

 

Handling Keyboard Events

 

Our first example is simply going to respond to WASD and Arrow keys to move the Cocos2d-x logo around the screen.  In this example I made no special modifications to a standard scene, so the header is unchanged from previous tutorials.

 

KeyboardScene.cpp

#include "KeyboardScene.h"

USING_NS_CC;

Scene* KeyboardScene::createScene()
{
    auto scene = Scene::create();
    
    auto layer = KeyboardScene::create();
    scene->addChild(layer);
    return scene;
}

bool KeyboardScene::init()
{
    if ( !Layer::init() )
    {
        return false;
    }
    
    auto sprite = Sprite::create("HelloWorld.png");
    sprite->setPosition(this->getContentSize().width/2, this->getContentSize().height/2);

    this->addChild(sprite, 0);

    auto eventListener = EventListenerKeyboard::create();



    eventListener->onKeyPressed = [](EventKeyboard::KeyCode keyCode, Event* event){

        Vec2 loc = event->getCurrentTarget()->getPosition();
        switch(keyCode){
            case EventKeyboard::KeyCode::KEY_LEFT_ARROW:
            case EventKeyboard::KeyCode::KEY_A:
                event->getCurrentTarget()->setPosition(--loc.x,loc.y);
                break;
            case EventKeyboard::KeyCode::KEY_RIGHT_ARROW:
            case EventKeyboard::KeyCode::KEY_D:
                event->getCurrentTarget()->setPosition(++loc.x,loc.y);
                break;
            case EventKeyboard::KeyCode::KEY_UP_ARROW:
            case EventKeyboard::KeyCode::KEY_W:
                event->getCurrentTarget()->setPosition(loc.x,++loc.y);
                break;
            case EventKeyboard::KeyCode::KEY_DOWN_ARROW:
            case EventKeyboard::KeyCode::KEY_S:
                event->getCurrentTarget()->setPosition(loc.x,--loc.y);
                break;
        }
    };

    this->_eventDispatcher->addEventListenerWithSceneGraphPriority(eventListener,sprite);

    return true;
}

 

When run, you see the logo centered and can move it around using either WASD or arrow keys.

(Screenshot: the Cocos2d-x logo being moved around the screen with the keyboard)

The code works almost identically to our earlier touch examples.  You create an EventListener, in this case an EventListenerKeyboard, and implement the onKeyPressed event handler.  The first parameter passed in is the EventKeyboard::KeyCode enum, a value representing the key that was pressed.  The second is the Event pointer, whose target in this case is our sprite.  We use the Event pointer to get the target Node and update its position in a direction depending on which key was pressed.  Finally we wire up our scene’s _eventDispatcher to receive events.  Nothing really unexpected here.
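The same listener also exposes an onKeyReleased callback with an identical signature, which this first example doesn't use.  As a minimal sketch ( just logging, nothing more ), you could wire it up right alongside onKeyPressed before registering the listener:

    eventListener->onKeyReleased = [](EventKeyboard::KeyCode keyCode, Event* event){
        // Log the raw key code when the key is let go
        cocos2d::log("Key released: %d", static_cast<int>(keyCode));
    };

We will make real use of onKeyReleased in the polling example below.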

 

Polling the Keyboard

 

You may however ask yourself… what if I want to poll for keyboard state instead?  For example, what if you wanted to check whether the spacebar was pressed at any given time?

 

The short answer is: you can’t.  Cocos2d-x is entirely event driven.

 

The long answer, however, is that it’s relatively easy to roll your own solution, so let’s do that now.  I’ll jump right in with the code and discuss it after.

 

KeyboardScene.h

#pragma once

#include "cocos2d.h"
#include <map>
#include <chrono>


class KeyboardScene : public cocos2d::Layer
{
public:

    static cocos2d::Scene* createScene();
    virtual bool init();

    bool isKeyPressed(cocos2d::EventKeyboard::KeyCode);
    double keyPressedDuration(cocos2d::EventKeyboard::KeyCode);

    CREATE_FUNC(KeyboardScene);

private:
    static std::map<cocos2d::EventKeyboard::KeyCode,
        std::chrono::high_resolution_clock::time_point> keys;
    cocos2d::Label * label;
public:
    virtual void update(float delta) override;
};

 

KeyboardScene.cpp

#include "KeyboardScene.h"

USING_NS_CC;

Scene* KeyboardScene::createScene()
{
    auto scene = Scene::create();
    
    KeyboardScene* layer = KeyboardScene::create();
    scene->addChild(layer);
    return scene;
}

bool KeyboardScene::init()
{
    if ( !Layer::init() )
    {
        return false;
    }

    label = cocos2d::Label::createWithSystemFont("Press the CTRL Key","Arial",32);
    label->setPosition(this->getBoundingBox().getMidX(),this->getBoundingBox().getMidY());
    addChild(label);
    auto eventListener = EventListenerKeyboard::create();



    Director::getInstance()->getOpenGLView()->setIMEKeyboardState(true);
    eventListener->onKeyPressed = [=](EventKeyboard::KeyCode keyCode, Event* event){
        // If the key already exists, do nothing as it will already have a time stamp
        // Otherwise, set its timestamp to now
        if(keys.find(keyCode) == keys.end()){
            keys[keyCode] = std::chrono::high_resolution_clock::now();
        }
    };
    eventListener->onKeyReleased = [=](EventKeyboard::KeyCode keyCode, Event* event){
        // remove the key.  std::map.erase() doesn't care if the key doesn't exist
        keys.erase(keyCode);
    };

    this->_eventDispatcher->addEventListenerWithSceneGraphPriority(eventListener,this);

    // Let cocos know we have an update function to be called.
    // No worries, I'll cover this in more detail later on
    this->scheduleUpdate();
    return true;
}

bool KeyboardScene::isKeyPressed(EventKeyboard::KeyCode code) {
    // Check if the key is currently pressed by seeing if it's in the std::map keys
    // In retrospect, keys is a terrible name for a key/value paired datatype, isn't it?
    if(keys.find(code) != keys.end())
        return true;
    return false;
}

double KeyboardScene::keyPressedDuration(EventKeyboard::KeyCode code) {
    if(!isKeyPressed(code))
        return 0;  // Not pressed, so no duration obviously

    // Return the amount of time that has elapsed between now and when the user
    // first started holding down the key in milliseconds
    // Obviously the start time is the value we hold in our std::map keys
    return std::chrono::duration_cast<std::chrono::milliseconds>
            (std::chrono::high_resolution_clock::now() - keys[code]).count();
}

void KeyboardScene::update(float delta) {
    // Each frame, check to see if the CTRL key is pressed
    // and if it is, display how long; otherwise tell the user to press it
    Node::update(delta);
    if(isKeyPressed(EventKeyboard::KeyCode::KEY_CTRL)) {
        std::stringstream ss;
        ss << "Control key has been pressed for " << 
            keyPressedDuration(EventKeyboard::KeyCode::KEY_CTRL) << " ms";
        label->setString(ss.str().c_str());
    }
    else
        label->setString("Press the CTRL Key");
}
// Because cocos2d-x requires createScene to be static, we need to make other non-pointer members static
std::map<cocos2d::EventKeyboard::KeyCode,
        std::chrono::high_resolution_clock::time_point> KeyboardScene::keys;

 

And when you run it:

(Screenshot: the label reporting how long the CTRL key has been held, in milliseconds)

 

So, what are we doing here?  Well essentially we record key events as they come in.  We have two events to work with, onKeyPressed and onKeyReleased.  When a key is pressed, we store it in a std::map, using the KeyCode as the key and the current time as the value.  When the key is released, we remove the released key from the map.  Therefore at any given time, we know which keys are pressed and for how long.  In this particular example, in the update() function ( ignore that for now, I’ll get into it later! ) we poll to see if the Control key is pressed.  If it is, we find out for how long and display a string.
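To give a sense of how this gets used in practice, here is a hedged sketch of a movement-style update() that polls isKeyPressed() each frame.  GameScene and its player member are hypothetical, not part of the scene above; the sketch assumes GameScene has the same isKeyPressed() helper and key map:

void GameScene::update(float delta) {
    Node::update(delta);

    // "player" is a hypothetical Sprite* member; move it while A or D is held down
    float speed = 200.0f * delta;   // 200 pixels per second, scaled by the frame time
    if (isKeyPressed(EventKeyboard::KeyCode::KEY_D))
        player->setPositionX(player->getPositionX() + speed);
    if (isKeyPressed(EventKeyboard::KeyCode::KEY_A))
        player->setPositionX(player->getPositionX() - speed);
}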

 

So, even though polling isn’t built in to Cocos2d-x, it is relatively easy to add.

 

Dealing with Keyboards on Mobile Devices

 

So, what about keyboards on mobile devices?  All Android phones and iOS devices are able to display a Soft Keyboard ( the onscreen keyboard ), can we use it?  The answer is… sort of.

 

What about physical keyboards on mobile devices?


You may be wondering how a physical keyboard on a mobile device works with Cocos2d-x. In the case of an iPad, the answer is: it doesn't. When I hooked up a Bluetooth keyboard, absolutely nothing happened. The same occurred when I paired the keyboard to my Android phone. However, I do not have an Android device with a built-in physical keyboard, such as the Asus Transformer, but my gut says it wouldn't work either. At least, not without you doing a lot of legwork, that is.

 

“Sort of” isn’t really a great answer, so I will go into a bit more detail.  Yes, you can use the soft keyboard, but in a very limited manner.  Basically you can use it for text entry only.  Truth is though, this should be enough, as controlling a game using a soft keyboard would be a horrid experience.

 

Let’s take a look at an example using TextFieldTTF and implementing a TextFieldDelegate:

 

KeyTabletScene.h

#pragma once
#include "cocos2d.h"

class KeyTabletScene : public cocos2d::Layer, public cocos2d::TextFieldDelegate
{
public:
    virtual ~KeyTabletScene();

    virtual bool onTextFieldAttachWithIME(cocos2d::TextFieldTTF *sender) override;

    virtual bool onTextFieldDetachWithIME(cocos2d::TextFieldTTF *sender) override;

    virtual bool onTextFieldInsertText(cocos2d::TextFieldTTF *sender, const char *text, size_t nLen) override;

    virtual bool onTextFieldDeleteBackward(cocos2d::TextFieldTTF *sender, const char *delText, size_t nLen) override;

    virtual bool onVisit(cocos2d::TextFieldTTF *sender, cocos2d::Renderer *renderer, cocos2d::Mat4 const &transform, uint32_t flags) override;

    static cocos2d::Scene* createScene();
    virtual bool init();
    CREATE_FUNC(KeyTabletScene);
};

 

KeyTabletScene.cpp

#include "KeyTabletScene.h"

USING_NS_CC;

Scene* KeyTabletScene::createScene()
{
    auto scene = Scene::create();
    
    auto layer = KeyTabletScene::create();
    scene->addChild(layer);

    return scene;
}

bool KeyTabletScene::init()
{
    if ( !Layer::init() )
    {
        return false;
    }


    // Create a text field
    TextFieldTTF* textField = cocos2d::TextFieldTTF::textFieldWithPlaceHolder("Click here to type",
            cocos2d::Size(400,200),TextHAlignment::LEFT , "Arial", 42.0);
    textField->setPosition(this->getBoundingBox().getMidX(),
            this->getBoundingBox().getMaxY() - 20);
    textField->setColorSpaceHolder(Color3B::GREEN);
    textField->setDelegate(this);

    this->addChild(textField);

    // Add a touch handler to our textfield that will show a keyboard when touched
    auto touchListener = EventListenerTouchOneByOne::create();

    touchListener->onTouchBegan = [](cocos2d::Touch* touch, cocos2d::Event * event) -> bool {
        // Show the on screen keyboard.  Note that dynamic_cast on a pointer returns
        // nullptr (it does not throw) if the target isn't actually a TextFieldTTF
        auto textField = dynamic_cast<TextFieldTTF *>(event->getCurrentTarget());
        if (textField != nullptr)
            textField->attachWithIME();
        return true;
    };

    this->_eventDispatcher->addEventListenerWithSceneGraphPriority(touchListener, textField);

    return true;
}

KeyTabletScene::~KeyTabletScene() {

}

bool KeyTabletScene::onTextFieldAttachWithIME(TextFieldTTF *sender) {
    return TextFieldDelegate::onTextFieldAttachWithIME(sender);
}

bool KeyTabletScene::onTextFieldDetachWithIME(TextFieldTTF *sender) {
    return TextFieldDelegate::onTextFieldDetachWithIME(sender);
}

bool KeyTabletScene::onTextFieldInsertText(TextFieldTTF *sender, const char *text, size_t nLen) {
    return TextFieldDelegate::onTextFieldInsertText(sender, text, nLen);
}

bool KeyTabletScene::onTextFieldDeleteBackward(TextFieldTTF *sender, const char *delText, size_t nLen) {
    return TextFieldDelegate::onTextFieldDeleteBackward(sender, delText, nLen);
}

bool KeyTabletScene::onVisit(TextFieldTTF *sender, Renderer *renderer, const Mat4 &transform, uint32_t flags) {
    return TextFieldDelegate::onVisit(sender, renderer, transform, flags);
}

 

And when you run it:

(Screenshot: the on-screen keyboard attached to the text field)

 

Essentially when the user touches the screen, we display the onscreen keyboard with a call to attachWithIME(); the rest is handled by the text field.
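Dismissing the keyboard is the mirror image, via detachWithIME().  One way to hook that up, as a sketch rather than something the sample above does, is to close the keyboard when the user presses Return, which arrives through onTextFieldInsertText as a newline character:

bool KeyTabletScene::onTextFieldInsertText(TextFieldTTF *sender, const char *text, size_t nLen) {
    // A single newline character means the user hit Return; hide the keyboard
    if (nLen == 1 && text[0] == '\n') {
        sender->detachWithIME();
        return true; // returning true tells the text field not to insert the character
    }
    return TextFieldDelegate::onTextFieldInsertText(sender, text, nLen);
}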

 

I have a sneaking feeling this method is going to be deprecated at some point in the future, replaced by the cocos2d::ui classes, but for now it works just fine.  For the record, it is actually possible to force up the on-screen keyboard by calling Director::getInstance()->getOpenGLView()->setIMEKeyboardState(true), but it seemingly pushes your scene to the background, so it isn’t a viable option for controlling a game.  I was going to look into a workaround but then thought, really… this is a downright stupid thing to do.  Doing anything other than text entry with a soft keyboard is just a bad idea.

 

 


6. October 2014

 

So today I upgraded to Xcode 6.1 beta because I felt like having an afternoon of frustration.  After upgrading my OS version, purging my old copy of Xcode and installing the newest version, I loaded up an existing project and:

[REDACTED]: no identity found

Command /usr/bin/codesign failed with exit code 1

Ah, crap.

 

After struggling for a bit, I found there is an easy, but completely unintuitive, fix.  Simply go into the Xcode menu, then Preferences.

(Screenshot: the Xcode menu with Preferences selected)

 

For your account, select View Details...

(Screenshot: the Accounts pane with View Details... highlighted)

 

Hit the refresh icon.

(Screenshot: the refresh icon in the account details view)

If prompted to do so, let it download the missing profiles.  

 

Restart Xcode.

 

Oh yeah, and do a Build->Clean.

 

Headache hopefully gone.

6. October 2014

 

 

In today’s tutorial we are going to cover the simple but powerful concept of grouping in Phaser.  As the name suggests, grouping allows you to group related Sprites together.  It’s probably easiest to jump right in with a simple example:

 

/// <reference path="phaser.d.ts"/>
class SimpleGame {
    game: Phaser.Game;
    sprite: Phaser.Sprite;
    group: Phaser.Group;
    
    constructor() {
        this.game = new Phaser.Game(640, 480, Phaser.AUTO, 'content', {
            create: this.create, preload:
            this.preload, render: this.render
        });
    }
    preload() {
        this.game.load.image("decepticon", "decepticon.png");
        
    }
    render() {

    }
    create() {
        this.group = this.game.add.group();
        this.group.create(0, 0, "decepticon");
        this.group.create(100, 100, "decepticon");
        this.group.create(200, 200, "decepticon");

        this.game.add.tween(this.group).to({ x: 250 }, 2000,
            Phaser.Easing.Linear.None, true, 0, 1000, true).start();
    }
}

window.onload = () => {
    var game = new SimpleGame();
};

 

And run it:

 

 

As you can see, all objects in the group are updated as the group is updated.  You may notice we created the sprites directly in the group using Group.create, but we didn’t have to.  We could have just as easily done:

 

    create() {
        this.group = this.game.add.group();
        var sprite1 = this.game.add.sprite(0, 0, "decepticon");
        var sprite2 = this.game.add.sprite(100, 100, "decepticon");
        var sprite3 = this.game.add.sprite(200, 200, "decepticon");
        this.group.add(sprite1);
        this.group.add(sprite2);
        this.group.add(sprite3);

        this.game.add.tween(this.group).to({ x: 250 }, 2000,
            Phaser.Easing.Linear.None, true, 0, 1000, true).start();
    }

 

You can also add groups to groups, like so:

 

    create() {
        this.group = this.game.add.group();
        this.group2 = this.game.add.group();

        this.group.create(0, 0, "decepticon");

        this.group2.create(100, 100, "decepticon");
        this.group2.create(200, 200, "decepticon");

        this.group.add(this.group2);

        this.game.add.tween(this.group).to({ x: 250 }, 2000,
            Phaser.Easing.Linear.None, true, 0, 1000, true).start();
    }

 

The above code performs identically to the earlier example.  This can provide a great way to organize your game into logical entities, such as a group for the background, a group for scrolling foreground clouds, a group for bullets, etc.

 

Grouping things together is all well and good, but if you can’t do anything with the group, it’s mostly just pointless.  Fortunately, there is quite a bit you can do with a group.  You can loop through them:

 

    create() {
        this.group = this.game.add.group();

        this.group.create(0, 0, "decepticon");
        this.group.create(100, 100, "decepticon");
        this.group.create(200, 200, "decepticon");

        // Set each item in the group's x value to 0
        this.group.forEach((entity) => {
            entity.x = 0;
        }, this, false);

        this.game.add.tween(this.group).to({ x: 250 }, 2000,
            Phaser.Easing.Linear.None, true, 0, 1000, true).start();
    }

 

You can sort them:

 

    create() {
        this.group = this.game.add.group();

        this.group.create(0, 0, "decepticon");
        this.group.create(100, 100, "decepticon");
        this.group.create(200, 200, "decepticon");

        // Sort group by y coordinate descending
        this.group.sort("y", Phaser.Group.SORT_DESCENDING); 
        this.game.add.tween(this.group).to({ x: 250 }, 2000,
            Phaser.Easing.Linear.None, true, 0, 1000, true).start();
    }

 

You can update a property on all group members at once:

 

    create() {
        this.group = this.game.add.group();

        this.group.create(0, 0, "decepticon");
        this.group.create(100, 100, "decepticon");
        this.group.create(200, 200, "decepticon");

        // set the alpha value of all sprites to 50%
        this.group.setAll("alpha", 0.5);

        this.game.add.tween(this.group).to({ x: 250 }, 2000,
            Phaser.Easing.Linear.None, true, 0, 1000, true).start();
    }

 

Running:

 

 

You can get the index of any item within the group:

 

    create() {
        this.group = this.game.add.group();

        var sprite1 = this.group.create(0, 0, "decepticon");
        this.group.create(100, 100, "decepticon");
        this.group.create(200, 200, "decepticon");
        this.group.sort("y", Phaser.Group.SORT_DESCENDING);

        var index = this.group.getIndex(sprite1);
        this.game.add.text(0, 0, "Sprite1's index is:" + index,
            { font: "65px Arial", fill: "#ff0000", align: "center" },
            this.group); // Index should be 2

        this.game.add.tween(this.group).to({ x: 250 }, 2000,
            Phaser.Easing.Linear.None, true, 0, 1000, true).start();
    }

 

Running:

 

 

And as you might be able to see from the above example, you can also add text objects directly to groups!

One other important concept of Groups is being dead or alive.  There are all kinds of methods for checking if an entity is alive or not, like so:

 

    create() {
        this.group = this.game.add.group();

        var sprite1 = this.group.create(0, 0, "decepticon");
        this.group.create(100, 100, "decepticon");
        this.group.create(200, 200, "decepticon");


        sprite1.alive = false;
        this.group.forEachDead((entity) => {
            entity.visible = false;
        }, this);

        this.game.add.tween(this.group).to({ x: 250 }, 2000,
            Phaser.Easing.Linear.None, true, 0, 1000, true).start();
    }

 

This kills off the first of the three sprites, leaving you with:

 

 

This tutorial only scratched the surface of what groups can do.  Simply put, they are a convenient data type for logically organizing your game objects.  The only thing I didn’t mention is what happens when a group doesn’t have a parent.  In this situation, that parent is the game’s World, which… is itself a group.

 


3. October 2014

 

 

In this part of the Cocos2d-x tutorial series we are going to look at how to handle touch and mouse events.  First, you should be aware that by default Cocos2d-x treats a mouse left click as a touch, so if you only have simple input requirements and don’t require multi-touch support ( which is remarkably difficult to perform with a single mouse! ), you can simply implement just the touch handlers.  This part is going to be code heavy, as we actually have 3 different tasks to cover here ( touch, multi-touch and mouse ), although all are very similar in overall behavior.

 

Let’s jump in with an ultra simple example.  Once again, I assume you’ve done the earlier tutorial parts and already have an AppDelegate.

 

Handle Touch/Click Events

 

TouchScene.h

#pragma once

#include "cocos2d.h"

class TouchScene : public cocos2d::Layer
{
public:
    static cocos2d::Scene* createScene();
    virtual bool init();  

    virtual bool onTouchBegan(cocos2d::Touch*, cocos2d::Event*);
    virtual void onTouchEnded(cocos2d::Touch*, cocos2d::Event*);
    virtual void onTouchMoved(cocos2d::Touch*, cocos2d::Event*);
    virtual void onTouchCancelled(cocos2d::Touch*, cocos2d::Event*);
    CREATE_FUNC(TouchScene);

private:
   cocos2d::Label* labelTouchInfo;
};

TouchScene.cpp

 

#include "TouchScene.h"

USING_NS_CC;

Scene* TouchScene::createScene()
{
    auto scene = Scene::create();
    auto layer = TouchScene::create();
    scene->addChild(layer);

   return scene;
}

bool TouchScene::init()
{
    if ( !Layer::init() )
    {
        return false;
    }
    
   labelTouchInfo = Label::createWithSystemFont("Touch or click somewhere to begin", "Arial", 30);

   labelTouchInfo->setPosition(Vec2(
      Director::getInstance()->getVisibleSize().width / 2,
      Director::getInstance()->getVisibleSize().height / 2));

   auto touchListener = EventListenerTouchOneByOne::create();

   touchListener->onTouchBegan = CC_CALLBACK_2(TouchScene::onTouchBegan, this);
   touchListener->onTouchEnded = CC_CALLBACK_2(TouchScene::onTouchEnded, this);
   touchListener->onTouchMoved = CC_CALLBACK_2(TouchScene::onTouchMoved, this);
   touchListener->onTouchCancelled = CC_CALLBACK_2(TouchScene::onTouchCancelled, this);

   _eventDispatcher->addEventListenerWithSceneGraphPriority(touchListener, this);
    
   this->addChild(labelTouchInfo);
   return true;
}

bool TouchScene::onTouchBegan(Touch* touch, Event* event)
{
   labelTouchInfo->setPosition(touch->getLocation());
   labelTouchInfo->setString("You Touched Here");
   return true;
}

void TouchScene::onTouchEnded(Touch* touch, Event* event)
{
   cocos2d::log("touch ended");
}

void TouchScene::onTouchMoved(Touch* touch, Event* event)
{
   cocos2d::log("touch moved");
}

void TouchScene::onTouchCancelled(Touch* touch, Event* event)
{
   cocos2d::log("touch cancelled");
}

 

Then if you run it, when you perform a touch or click:

(Screenshot: the label moved to the touch location, with touch events logged in the console behind it)

 

As you can see, a text label is displayed wherever you touch the screen.  Looking in the background of that screenshot you can see touch moved events are constantly being fired and logged.  Additionally, touch ended events are fired when the user removes their finger ( or releases the mouse button ).

 

Now let’s take a quick look at the code.  Our header file is pretty straightforward.  In addition to the normal methods, we add a quartet of handler functions for handling the various possible touch events.  We also add a member variable for the Label used to draw text on the screen.

 

In the cpp file, we create the scene like normal.  In init() we create an EventListener of type EventListenerTouchOneByOne, which predictably handles touches, um, one by one ( as opposed to all at once, which we will see later ).  We then map each possible event, touch began, touch ended, touch cancelled and touch moved, to its corresponding handler function using the macro CC_CALLBACK_2, passing the function to execute and the context ( or target ).  This too will make sense later, so hold on there.  One thing to watch out for here, and one point of confusion for me: onTouchBegan has a different signature than every other event, returning a bool.  I am not entirely certain why this one event is handled differently, seems like a bad idea to me personally, but there may be a good design reason I am unaware of.

 

The last thing we do is register our EventListener to receive events.  This is done with a call to Node’s protected member _eventDispatcher.  We call addEventListenerWithSceneGraphPriority(), which means the listener’s priority is determined by the node’s position in the scene graph ( nodes drawn on top get the event first ).  We will see an example of setting a different priority level later on.

 

What's this CC_CALLBACK_2 black magic?


I'm generally not a big fan of macro usage in C++. I believe macros lead programmers to eventually turn their libraries into meta-programming languages and ultimately obfuscate the underlying code in the name of clarity. This, however, is one of the exceptions to the rule. CC_CALLBACK_2, and the entire CC_CALLBACK_ family, is simply a wrapper around some standard C++ code, specifically a call to std::bind. Here is the actual macro code:

#define CC_CALLBACK_0(__selector__,__target__, ...) std::bind(&__selector__,__target__, ##__VA_ARGS__)
#define CC_CALLBACK_1(__selector__,__target__, ...) std::bind(&__selector__,__target__, std::placeholders::_1, ##__VA_ARGS__)
#define CC_CALLBACK_2(__selector__,__target__, ...) std::bind(&__selector__,__target__, std::placeholders::_1, std::placeholders::_2, ##__VA_ARGS__)
#define CC_CALLBACK_3(__selector__,__target__, ...) std::bind(&__selector__,__target__, std::placeholders::_1, std::placeholders::_2, std::placeholders::_3, ##__VA_ARGS__)

Basically, std::bind is for binding parameters to a function. The std::placeholders mark the arguments that will be filled in when the callback is eventually invoked, so their count matches the number of parameters your function expects. So for example, when you use CC_CALLBACK_2, you are saying that function takes two parameters, in this case a Touch* pointer and an Event* pointer. Similarly, CC_CALLBACK_1 would expect the provided function to take a single parameter. This kind of code is incredibly common in C++11; it's ugly, hard to read and grok, and it's easy to mistype. In these cases, macro use shines. Just be aware of what the macro you are calling does. Each time you encounter a macro in code, I recommend you right click and "Go to Definition" ( or CTRL+Click in Xcode ) to see what it actually does, even if it doesn't make complete sense.
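To make that concrete, the CC_CALLBACK_2 line from the first example could be written out by hand. This does the same thing, just without the macro ( std::bind lives in <functional> ):

   // equivalent to CC_CALLBACK_2(TouchScene::onTouchBegan, this)
   touchListener->onTouchBegan = std::bind(&TouchScene::onTouchBegan, this,
      std::placeholders::_1, std::placeholders::_2);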

 

 

In most of the touch handlers, we simply log that the event occurred.  In the event of a touch starting ( or click beginning ) we update the position of the label to where the user clicked and display the string “You Touched Here”.

 

Now let’s take a look at an example that uses lambdas instead.  This example also goes into a bit more detail about what’s in that Touch pointer we are being passed.  The header file is basically the same, except there are no onTouch____ functions.

 

Handling Touch Events using Lambdas and dealing with Touch coordinates

 

TouchScene.cpp

#include "TouchScene.h"

USING_NS_CC;

Scene* TouchScene::createScene()
{
    auto scene = Scene::create();
    auto layer = TouchScene::create();
    scene->addChild(layer);

    return scene;
}

bool TouchScene::init()
{
    if ( !Layer::init() )
    {
        return false;
    }
    
   auto sprite = Sprite::create("HelloWorld.png");
   sprite->setPosition(Vec2(Director::getInstance()->getVisibleSize().width / 2,
      Director::getInstance()->getVisibleSize().height / 2));

    // Add a "touch" event listener to our sprite
   auto touchListener = EventListenerTouchOneByOne::create();
   touchListener->onTouchBegan = [](Touch* touch, Event* event) -> bool {

      auto bounds = event->getCurrentTarget()->getBoundingBox();

      if (bounds.containsPoint(touch->getLocation())){
         std::stringstream touchDetails;
         touchDetails << "Touched at OpenGL coordinates: " << 
            touch->getLocation().x << "," << touch->getLocation().y << std::endl <<
            "Touched at UI coordinate: " << 
            touch->getLocationInView().x << "," << touch->getLocationInView().y << std::endl <<
            "Touched at local coordinate:" <<
            event->getCurrentTarget()->convertToNodeSpace(touch->getLocation()).x << "," <<  
            event->getCurrentTarget()->convertToNodeSpace(touch->getLocation()).y << std::endl <<
            "Touch moved by:" << touch->getDelta().x << "," << touch->getDelta().y;

            MessageBox(touchDetails.str().c_str(), "Touched");
         }
      return true;
      };

   Director::getInstance()->getEventDispatcher()->addEventListenerWithSceneGraphPriority(touchListener,sprite);
   this->addChild(sprite, 0);
    
    return true;
}

 

Now when you run it:

(Screenshot: a message box listing the touch position in the different coordinate spaces)

 

In this example, the touch event will only fire if the user clicked on the Sprite in the scene.  Notice that in the first line of the onTouchBegan handler I call event->getCurrentTarget()?  This is where the context becomes important.  In the line:

Director::getInstance()->getEventDispatcher()->addEventListenerWithSceneGraphPriority(touchListener,sprite);

The second parameter, sprite, is what determines the target of the Event.  The target is passed as a Node but can be cast if required.
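If you actually need the Sprite-specific interface inside the handler, a quick downcast does the trick.  This is just an illustrative sketch; the fade isn't part of the example above:

      // getCurrentTarget() hands back a Node*; downcast when Sprite members are needed
      auto targetSprite = dynamic_cast<Sprite*>(event->getCurrentTarget());
      if (targetSprite != nullptr)
         targetSprite->setOpacity(128);   // e.g. fade the sprite that was touched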

 

Lambda?


Lambdas are a new feature of C++ and they are probably something you will either love or hate. If you come from a language like C# you will probably find them long overdue; I certainly do!
Lambda is a scary sounding expression coming from the scary looking symbol Λ. In the world of mathematics, lambda calculus basically gives math the ability to define functions, something we as programmers can certainly appreciate. In the world of programming, it's nowhere near as scary: a lambda expression can be thought of as an anonymous function. In simple terms, it allows you to create a nameless function where you need it. As you can see from this example, it allows you to put event handling logic where it makes most sense, instead of splitting it out into a separate function. It is also a godsend when you want to pass a function as a parameter, a very common task in the C++ standard libraries.
The syntax of C++ lambdas is pretty ugly, but they are certainly a valuable addition to the language. Most importantly, they can often make your code easier to express and as such, easier to comprehend and maintain. Learn to love the lambda and the lambda will learn to love you. Maybe.

 

In this example, we use the target node to only handle clicks that happen within the bounds of our Sprite node.  This is done by testing whether the touch location is within the bounding box of the node.  If it is, we display a number of details in a message box.  Remember back in this tutorial part where I said there are multiple coordinate systems?  This is a perfect example.  As you can see from the message box above, getLocation() and getLocationInView() return different values: getLocation() is relative to the bottom left corner of the screen ( OpenGL coordinates ), while getLocationInView() is relative to the top left corner ( UI coordinates ).

 

Sometimes you also want to know where the click occurred relative to the node itself.  In the sample above, the local coordinate is the position the click occurred at relative to the node’s origin.  In order to calculate this location we use the helper function convertToNodeSpace().  One final thing you may notice is that I registered the EventListener with the Director instead of _eventDispatcher.  This was the old way of doing things and I did it this way for a couple of reasons.  First, to show that you can.  Second, because _eventDispatcher is a protected member, I would only have access to it if I derived my own Sprite object.

 

Now let’s take a look at a multi-touch example.

 

Dealing with Multi-touch

 

Multi-touch works pretty much the same way, just with a separate set of event handlers.  There are a few catches however.  The big one is iOS.  Out of the box, Android just works.  iOS however requires you to make a small code change to enable multitouch support.  Don’t worry, it’s a simple process. 

 

In your project, locate the directory /proj.ios_mac/ios and open the file AppController.mm.  Then add the following line:

(Screenshot: AppController.mm with the setMultipleTouchEnabled line added)

 

Simply add the line [eaglView setMultipleTouchEnabled:YES]; somewhere after the creation of eaglView.  Now multitouch should work in your iOS application, let’s look at some code:

 

MultiTouchScene.h

#pragma once

#include "cocos2d.h"

class MultiTouch : public cocos2d::Layer
{

    public:
        static cocos2d::Scene* createScene();

        virtual bool init();
        CREATE_FUNC(MultiTouch);
    private:
        const static int MAX_TOUCHES = 5;

    protected:
        cocos2d::Label* labelTouchLocations[MAX_TOUCHES];

};

 

MultiTouchScene.cpp

#include "MultiTouchScene.h"

USING_NS_CC;

Scene* MultiTouch::createScene()
{
    auto scene = Scene::create();
    auto layer = MultiTouch::create();
    scene->addChild(layer);

    return scene;
}

bool MultiTouch::init()
{
    if ( !Layer::init() )
    {
        return false;
    }

    // Create an array of Labels to display touch locations and add them to this node, defaulted to invisible
    for(int i= 0; i < MAX_TOUCHES; ++i) {
        labelTouchLocations[i] = Label::createWithSystemFont("", "Arial", 42);
        labelTouchLocations[i]->setVisible(false);
        this->addChild(labelTouchLocations[i]);
    }

    auto eventListener = EventListenerTouchAllAtOnce::create();

    //  Create an eventListener to handle multiple touches, using a lambda, cause baby, it's C++11
    eventListener->onTouchesBegan = [=](const std::vector<Touch*>&touches, Event* event){

        // Clear all visible touches just in case there are fewer fingers touching than last time
        std::for_each(labelTouchLocations,labelTouchLocations+MAX_TOUCHES,[](Label* touchLabel){
            touchLabel->setVisible(false);
        });

        // For each touch in the touches vector, set a Label to display at its location and make it visible
        for(int i = 0; i < touches.size(); ++i){
            labelTouchLocations[i]->setPosition(touches[i]->getLocation());
            labelTouchLocations[i]->setVisible(true);
            labelTouchLocations[i]->setString("Touched");
        }
    };

    _eventDispatcher->addEventListenerWithSceneGraphPriority(eventListener, this);

    return true;
}

 

Here is the code running on my iPad with multiple fingers touched:

(Photo: the multi-touch example running on an iPad, with a label at each touched location)

 

Granted, not the most exciting screenshot ever, but as you can see, a label is printed at each location the user touches.  Let’s take a quick look at the code and see what’s happening.  At this point, most of it should be pretty familiar, so let’s just focus on the differences.

 

First you will notice I added an array of Labels, MAX_TOUCHES in size.  I chose 5 as, frankly, that seems to be the limit of what I could register on the iPad.  I had it set to 10, but it never registered more than 5, so 5 it was!  Truth of the matter is, I can’t really imagine a control scheme that used more than 5 touches being all that useful, so 5 touches seems like a reasonable limitation, even though I’m pretty certain the hardware can handle more.

 

In our init() we start off by allocating each of our labels and setting their initial visibility to invisible.  Then we create our EventListener; this time we create an EventListenerTouchAllAtOnce because we want to, well, get all the touch events at the same time.  Instead of handling onTouchBegan, we instead handle onTouchesBegan, which takes a std::vector ( careful here, as cocos2d has its own Vector class… the peril of using namespace abuse! ) of Touch* as well as an Event*.

 

In the event of touch(es), we first loop through all of our labels and set them to invisible.  Then for each touch in the touches vector, we move a label to that position and make it visible.  Once again we register the EventListener with our node’s _eventDispatcher.

 

So, we’ve covered touch and multi-touch, what about when you want to use the mouse?  Amazingly enough there are users out there with mice with more than a single button after all! ;)

 

Handling the Mouse

 

At this point you can probably guess the code I am about to write, as the process is remarkably similar, but let’s go through it anyways.  I won’t bother with the .h file, there’s nothing special in there.

 

MouseScene.cpp

#include "MouseScene.h"

USING_NS_CC;

cocos2d::Scene* MouseScene::createScene()
{
    auto scene = Scene::create();
    auto layer = MouseScene::create();
    scene->addChild(layer);

    return scene;
}

bool MouseScene::init()
{
    if ( !Layer::init() )
    {
        return false;
    }   

   auto listener = EventListenerMouse::create();
   listener->onMouseDown = [](cocos2d::Event* event){

      // dynamic_cast on a pointer returns nullptr (it does not throw) if the
      // Event we were handed isn't actually an EventMouse
      EventMouse* mouseEvent = dynamic_cast<EventMouse*>(event);
      if (mouseEvent == nullptr)
         return;

      std::stringstream message;
      message << "Mouse event: Button: " << mouseEvent->getMouseButton() << " pressed at point (" <<
         mouseEvent->getLocation().x << "," << mouseEvent->getLocation().y << ")";
      MessageBox(message.str().c_str(), "Mouse Event Details");
   };

   listener->onMouseMove = [](cocos2d::Event* event){
      // Cast Event to EventMouse for position details like above
      cocos2d::log("Mouse moved event");
   };

   listener->onMouseScroll = [](cocos2d::Event* event){
      cocos2d::log("Mouse wheel scrolled");
   };

   listener->onMouseUp = [](cocos2d::Event* event){
      cocos2d::log("Mouse button released");
   };

   _eventDispatcher->addEventListenerWithFixedPriority(listener, 1);

    return true;
}

 

Now run it, scroll the mouse wheel a couple times, click and you will see:

(Screenshot: the log of scroll events and a message box showing the mouse button and click location)

 

Yeah… not really exciting either.  As you can see, when you click a mouse button, the button is returned as a number.  Left button is 0, middle is 1, right is 2, etc.  The code is all very familiar, except we use an EventListenerMouse this time and handle onMouseDown, onMouseUp, onMouseMove and onMouseScroll.  The only other thing of note is that you need to cast the provided Event pointer to an EventMouse pointer to get access to the mouse details.

 

With the exception of gestures, that should pretty much cover all of your mouse and touch needs.  Gestures aren’t actually supported out of the box, but extensions exist.  Additionally, all mouse and touch events contain delta information as well as data on the previous touch/click, which should make rolling your own fairly simple.

 


1. October 2014

 

 

Now that we have Cocos2d-x installed and configured and our project created, we are going to take a look at basic graphics operations.  This tutorial assumes you ran through the prior part and created a project already.  I am going to assume you have a working AppDelegate, so I will only focus on creating a new scene object.   The only changes you should have to make are to change your delegate to #include a different file and change the type of scene you call createScene() on.
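For reference, the change in the AppDelegate boils down to something like the following sketch ( the rest of the generated applicationDidFinishLaunching(), including the GLView setup, stays exactly as the project wizard created it ):

#include "GraphicsScene.h"

bool AppDelegate::applicationDidFinishLaunching()
{
    auto director = cocos2d::Director::getInstance();
    // ... the generated GLView/director setup remains unchanged ...

    // create and run the scene defined in this tutorial part
    auto scene = GraphicsScene::createScene();
    director->runWithScene(scene);

    return true;
}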

 

Ok, let’s jump right in with a simple example.  First we are going to need an image to draw.  Personally I am going to use this somewhat… familiar image:

 

(Image: decepticon.png)

 

It’s 400x360 with a transparent background, named decepticon.png.  Of course you can use whatever image you want.  Just be certain to add the image to the resources directory of your project.

 

(Screenshot: the image added to the project's Resources folder)

 

Ok, now the code to display it.

 

GraphicsScene.h

#pragma once

#include "cocos2d.h"

class GraphicsScene : public cocos2d::Layer
{
public:
    static cocos2d::Scene* createScene();
    virtual bool init();  
    CREATE_FUNC(GraphicsScene);
};

 

GraphicsScene.cpp

#include "GraphicsScene.h"

USING_NS_CC;

Scene* GraphicsScene::createScene()
{
    auto scene = Scene::create();
    auto layer = GraphicsScene::create();
   scene->addChild(layer);
    
    return scene;
}

bool GraphicsScene::init()
{
    if ( !Layer::init() )
    {
        return false;
    }
    
    auto sprite = Sprite::create("decepticon.png");
    sprite->setPosition(0, 0);
   
    this->addChild(sprite, 0);
    
    return true;
}

 

Now when you run this code:

(Screenshot: most of the sprite hangs off the bottom left corner of the window)

 

Hmmm, probably not exactly what you expected to happen, but hey, congratulations, you just rendered your first sprite!

 

So, what exactly is happening here?  Well after we created our sprite we called:

sprite->setPosition(0, 0);

This is telling the sprite to position itself at the pixel location (0,0).  There are two things we can take away from the results.

 

1- The position (0,0) is at the bottom left corner of the screen.

2- By default, the position of a sprite is relative to its own center point.

 

Which way is up?

 

One of the most confusing things when working in 2D graphics is dealing with all the various coordinate systems.  There are two major approaches to dealing with locations in 2D, having the location (0,0) at the top left of the screen and having the location (0,0) at the bottom left of the screen.  This point is referred to as the Origin.  It is common for UI systems, the most famous of which being Windows, to set the origin at the top left of the screen.  It is also most common for graphics files to store their image data starting from the top left pixel, but by no means is this universal.  On the other hand OpenGL and most mathematicians treat the bottom left corner of the screen as the origin.  If you stop and think about it, this approach makes a great deal of sense.

Think back to your high school math lessons ( unless of course you are in high school, in which case pay attention to your math lessons! ) and you will no doubt have encountered this graphic.

(Image: the Cartesian plane)

This is a graphic of the Cartesian plane and it is pretty much the foundation of algebra.  As you can clearly see, the positive quadrant ( the quarter of that graph with both positive x and y values ) is in the top right corner.  Quite clearly then, to a mathematician the value (0,0) is at the bottom left corner of the top right quadrant.

 

There are merits to both approaches of course, otherwise there would be only one way of doing things!

 

This will of course lead to some annoying situations, where one API for example delivers touch coordinates in UI space, relative to the top left corner, or when you load a texture that is inverted to what you actually wanted.

 

Fortunately, Cocos2d-x provides functionality to make these annoyances a little bit less annoying.  Just be aware going forward, that coordinate systems can and do change!  Also be aware, unlike some frameworks, Cocos2d-x does *NOT* allow you to change the coordinate system.  The origin in Cocos2d-x is always at the bottom left corner.
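As an example of smoothing over one of those annoyances: if some API hands you a point in UI space ( origin at the top left ), the Director can convert it into Cocos2d-x's bottom-left-origin space for you.  A quick sketch:

    Vec2 uiPoint(100, 100);                                         // origin at the top left of the screen
    Vec2 glPoint = Director::getInstance()->convertToGL(uiPoint);   // same point, origin at the bottom left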

 

Sometimes positioning relative to the middle can be ideal, especially when dealing with rotations.  However, sometimes you want to position relative to another point, generally using the top left or bottom left corner.  This is especially true if for example you want to, say… align the feet of a sprite to the top of a platform.  Changing the way a sprite ( or any Node ) is positioned is extremely simple in Cocos2d-x.  This is done using something called an Anchor Point.  Simply change the code like so:

 

   auto sprite = Sprite::create("decepticon.png");
   sprite->setAnchorPoint(Vec2(0, 0));
   sprite->setPosition(0, 0);

 

And presto!

(Screenshot: the sprite now fully visible in the bottom left corner of the window)

 

Now our sprite is positioned relative to its bottom left corner.  However, setAnchorPoint() might not take the parameters you expect it to.  Yes, you are passing in x and y coordinates that represent the location on the sprite to perform transforms relative to.  However, we are dealing with yet another coordinate system here: normalized coordinates.  These values are represented by two numbers, one for x and one for y, from 0 to 1, and they describe a position within the sprite.

 

Sprite?

 

If you are new to game programming, the expression "sprite" might be new to you.   The term was coined way back in 1981 by a Texas Instruments engineer describing the functionality of the TMS9918 chip.  Essentially a sprite was a bitmap image with hardware support for being moved around independently.  Early game hardware could handle only a few sprites, often used to represent the player and enemies in the world.  For a real world example, in Super Mario Brothers, Mario, the mushrooms, coins and such would be sprites.

These days, “Sprite” basically means a bitmap image ( or portion of a bitmap, we’ll see this later ) along with positional information; the concept of hardware sprites doesn’t really exist anymore.

 

I am making this sound entirely more complicated than it actually is.  Just be aware that the value (0,0) is the bottom left corner of the sprite, (1,1) is the top right of the sprite and (0.5,0.5) would be the mid point of the sprite.  This coordinate system is extremely common in computer graphics and is used heavily in shaders.  You may have heard of UV coordinates, used for positioning textures on 3D objects.  UV coordinates are expressed this way. 

 

Therefore, if you want to position the sprite using its mid point, you would instead do:

sprite->setAnchorPoint(Vec2(0.5, 0.5));

 

Another important concept to be aware of is that a sprite’s positioning is relative to its parent.  Up until now our sprite’s parent was our layer; let’s look at an example with a different Node as a parent.  This time, we are going to introduce another sprite, this one using this image:

(Image: autobot.png)

 

 

It’s a 200x180 transparent image named autobot.png.  Once again, add it to the resources folder of your project.  Now let’s change our code slightly:

bool GraphicsScene::init()
{
    if ( !Layer::init() )
    {
        return false;
    }
    
    auto sprite = Sprite::create("decepticon.png");
    auto sprite2 = Sprite::create("autobot.png");
    sprite->setAnchorPoint(Vec2(0.0,0.0));
    sprite2->setAnchorPoint(Vec2(0.0, 0.0));

    sprite->addChild(sprite2);
   
    sprite->setPosition(100, 100);
    sprite2->setPosition(0, 0);
   
    this->addChild(sprite, 0);
    
    return true;
}

 

And when we run this code:

(Screenshot: the autobot sprite drawn at the bottom left corner of the decepticon sprite)

 

There are a couple of very important things being shown here.  Until this point, we’ve simply added our Sprite to our Layer, but you can actually parent any Node to any other Node object.  As you can see, both Sprite and Layer inherit from Node, and Nodes form the backbone of a very important concept called a scene graph. 

 

One very important part of this relationship is that the child’s position is relative to its parent.  Therefore sprite2’s (0,0) position is relative to the origin of sprite ( not the anchor; origin and anchor are not the same thing, and only anchors can be changed ), so moving sprite also moves sprite2.  This is a great way to create hierarchies of nodes; for example, you could make a tank out of a sprite for its base and another representing its turret, as sketched below.  You could then move the base of the tank and the turret would move with it, but the turret itself could rotate independently.
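Here is a minimal sketch of that tank idea.  The image names tankBase.png and turret.png are made up for illustration:

    auto tankBase = Sprite::create("tankBase.png");
    auto turret = Sprite::create("turret.png");

    tankBase->addChild(turret);          // the turret is now positioned relative to the base
    turret->setPosition(tankBase->getContentSize().width / 2,
                        tankBase->getContentSize().height);

    tankBase->setPosition(200, 200);     // moves the base AND the turret
    turret->setRotation(45.0f);          // rotates only the turret
    this->addChild(tankBase);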

Scenegraph

 

A scenegraph is a pretty simple concept. It's basically the data structure used to hold the contents of your game. In Cocos2d-x, the scene graph is simply a tree of Node-derived objects. There exists a Scene node that is pretty much an empty node with the intention of having other nodes added to it. You then add various nodes to it, nodes to those nodes, etc. An overview of how you can use this system in Cocos2d-x is available here.

 

So what do you do when you want to get a node’s position in the world, not relative to its parent?  Fortunately Node has that functionality built in:

Vec2 worldPosition = sprite2->convertToWorldSpace(sprite2->getPosition());

 

worldPosition’s value would be (100,100).  There is also an equivalent function for converting a world space coordinate into node space, as shown below.
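That equivalent function is convertToNodeSpace().  As a quick sketch using the sprites above, it maps a world space point into sprite2’s local space:

    Vec2 localPosition = sprite2->convertToNodeSpace(Vec2(150, 150));
    // With the parent sprite at (100,100) and sprite2 at (0,0), localPosition works out to (50,50)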

 

So, in summary:

  • The world is composed of Node objects, including Scene, Sprite and Layer
  • The screen origin is at the bottom left corner of the screen, always
  • Nodes are positioned, scaled and rotated relative to their anchor point
  • The default anchor point of a sprite is (0.5,0.5), which is its mid point
  • Anchor points are defined with a value from (0,0) to (1,1), with (0,0) being bottom left corner and (1,1) being the top right
  • Nodes can have other nodes as children.  When the parent moves, the child moves with it
  • A child node’s origin is the bottom left corner of its parent
  • Anchor point and origin are not the same thing
  • Sprites can only have other sprites as children ( EDIT – No longer true!  There can however be some performance ramifications. Will discuss later )

 

So that covers the basics of dealing with graphics.  Now that we know how to position and parent nodes, next time we will look at something a bit more advanced.

 



Blender roadmap for 2.7, 2.8 and beyond

17. June 2013

I am a big proponent of Blender so I am always quite interested in how it is going to develop.  Recent releases have been all about bringing a number of projects that have been in the works for years back into the fold.  Functionality like BMesh and the Cycles renderer are now part of the core package and Blender is vastly improved as a result.  Now that most of that work is complete, Blender started looking toward the future and released their roadmap of upcoming features.

 

The nutshell version:

2.6x

  • For 2.68 and 2.69 we strictly keep compatibility and keep focusing on stability for Blender.
  • Anything potentially unstable or breaking compatibility should go to a 2.7 branch
  • If needed, we can do a couple of 2.69 updates (a b c d) to merge in bug fixes only.

 

2.7

  • Move to OpenGL 2.1 minimal (means: UI/tools can be designed needing it, like offscreen drawing)
  • Depsgraph refactor, including threaded updates
  • Fix our duplicator system, animation proxy (for local parts of linked/referenced data)
  • Redesign 3D viewport drawing (full cleanup of space_view3d module)
  • Work on cpu-based selection code for viewport
  • Sequencer rewrite
  • Asset manager, better UI and tools for handling linkage
  • Python “Custom Editor” api (including better Python support for event handlers, notifiers).
  • UI: refresh our default

 

2.8

  • New “unified physics” systems, using much more of Bullet, unification of point caches (Alembic).
  • Particle nodes (could co-exist for a while with old particles though)
  • Nodification of more parts of Blender (modifiers, constraints)
  • Game engine… (see below)
  • OpenGL 3.0?

 

Blender Game Engine

Or more radically worded: I propose to make the GE to become a real part of Blender code – to make it not separated anymore. This would make it more supported, more stable and (I’m sure) much more fun to work on as well.

Instead of calling it the “GE” we would just put Blender in “Interaction mode”. Topics to think of:

  • Integrate the concept of “Logic” in the animation system itself. Rule or behavior based animation is a great step forward for animation as well (like massive anims, or for extras).
  • Support of all Blender physics.
  • Optimizing speed for interactive playback will then also benefit regular 3d editing (and vice versa)
  • Singular Python API for logic scripting
  • Ensure good I/O integration with external game engines, similar to render engines.

What should then be dropped is the idea to make Blender have an embedded “true” game engine. We should acknowledge that we never managed to make something with the portability and quality of Unreal or Crysis… or even Unity3D. And Blender’s GPL license is not helping here much either.

On the positive side – I think that the main cool feature of our GE is that it was integrated with a 3D tool, to allow people to make 3D interaction for walkthroughs, for scientific sims, or game prototypes. If we bring back this (original) design focus for a GE, I think we still get something unique and cool, with seamless integration of realtime and ‘offline’ 3D.

 

All told, nothing earth shattering, but one heck of a big change in store for the Blender game engine.

