Cocos2d-x Tutorial Series: Handling Touch and Mouse Input

In this part of the Cocos2d-x tutorial series we are going to look at how to handle touch and mouse events.  First you should be aware that by default Cocos2d-x treats a mouse left click as a touch, so if you only have simple input requirements and don’t require multi-touch support ( which is remarkably difficult to perform with a single mouse! ), you can simply implement just the touch handlers.  This part is going to be code heavy, as we actually have three different tasks to cover here ( touch, multi-touch and mouse ), although all are very similar in overall behaviour.

Let’s jump in with an ultra simple example.  Once again, I assume you’ve done the earlier tutorial parts and already have an AppDelegate.

Handle Touch/Click Events

TouchScene.h

#pragma once

#include "cocos2d.h"

class TouchScene : public cocos2d::Layer
{
public:
    static cocos2d::Scene* createScene();
    virtual bool init();  

    virtual bool onTouchBegan(cocos2d::Touch*, cocos2d::Event*);
    virtual void onTouchEnded(cocos2d::Touch*, cocos2d::Event*);
    virtual void onTouchMoved(cocos2d::Touch*, cocos2d::Event*);
    virtual void onTouchCancelled(cocos2d::Touch*, cocos2d::Event*);
    CREATE_FUNC(TouchScene);

private:
   cocos2d::Label* labelTouchInfo;
};

TouchScene.cpp

#include "TouchScene.h"

USING_NS_CC;

Scene* TouchScene::createScene()
{
    auto scene = Scene::create();
    auto layer = TouchScene::create();
    scene->addChild(layer);

   return scene;
}

bool TouchScene::init()
{
    if ( !Layer::init() )
    {
        return false;
    }
    
   labelTouchInfo = Label::createWithSystemFont("Touch or click somewhere to begin", "Arial", 30);

   labelTouchInfo->setPosition(Vec2(
      Director::getInstance()->getVisibleSize().width / 2,
      Director::getInstance()->getVisibleSize().height / 2));

   auto touchListener = EventListenerTouchOneByOne::create();

   touchListener->onTouchBegan = CC_CALLBACK_2(TouchScene::onTouchBegan, this);
   touchListener->onTouchEnded = CC_CALLBACK_2(TouchScene::onTouchEnded, this);
   touchListener->onTouchMoved = CC_CALLBACK_2(TouchScene::onTouchMoved, this);
   touchListener->onTouchCancelled = CC_CALLBACK_2(TouchScene::onTouchCancelled, this);

   _eventDispatcher->addEventListenerWithSceneGraphPriority(touchListener, this);
    
   this->addChild(labelTouchInfo);
   return true;
}

bool TouchScene::onTouchBegan(Touch* touch, Event* event)
{
   labelTouchInfo->setPosition(touch->getLocation());
   labelTouchInfo->setString("You Touched Here");
   return true;
}

void TouchScene::onTouchEnded(Touch* touch, Event* event)
{
   cocos2d::log("touch ended");
}

void TouchScene::onTouchMoved(Touch* touch, Event* event)
{
   cocos2d::log("touch moved");
}

void TouchScene::onTouchCancelled(Touch* touch, Event* event)
{
   cocos2d::log("touch cancelled");
}

Now if you run it and perform a touch or click:

[ Screenshot: the label repositioned to where the screen was touched, with touch moved and touch ended messages logged in the console ]

As you can see, a text label is displayed wherever you touch on the screen.  Looking at the background of that screenshot, you can see touch moved events are constantly being fired and logged.  Additionally, touch ended events are fired when the user lifts their finger ( or releases the mouse button ).

Now let’s take a quick look at the code.  Our header file is pretty straightforward.  In addition to the normal methods, we add a quartet of handler functions for handling the various possible touch events.  We also add a member variable for our Label used to draw the text on the screen.

In the cpp file, we create the scene like normal.  In init() we create an EventListener of type EventListenerTouchOneByOne, which predictably handles touches, um, one by one ( as opposed to all at once, which we will see later ).  We then map each possible event, touch began, touch moved, touch ended and touch cancelled, to its corresponding handler function using the macro CC_CALLBACK_2, passing the function to execute and the context ( or target ).  This too will make sense later, so hold on there.  One thing to watch out for here, and a point of confusion for me at first: onTouchBegan has a different signature than every other event, returning a bool.  That return value tells the event dispatcher whether this listener wants to claim the touch; return true and the dispatcher will keep sending you the moved, ended and cancelled events for that touch, return false and this listener hears nothing more about it.
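To make that return value concrete, here is a minimal sketch ( not the code from the sample above ) of an onTouchBegan that only claims touches which land on the label; it assumes the label’s parent is the full-screen layer sitting at the origin:

bool TouchScene::onTouchBegan(Touch* touch, Event* event)
{
    // getBoundingBox() is in the parent's coordinate space; since the parent
    // here is the layer at the origin, it lines up with touch->getLocation()
    if (labelTouchInfo->getBoundingBox().containsPoint(touch->getLocation()))
    {
        labelTouchInfo->setString("You grabbed the label");
        return true;   // claim the touch: moved/ended/cancelled will follow
    }
    return false;      // ignore it: this listener hears nothing more about this touch
}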

The last thing we do is register our EventListener to receive events.  This is done through Node’s protected member _eventDispatcher.  We call addEventListenerWithSceneGraphPriority(), which means the listener’s priority follows its target node’s position in the scene graph, so nodes drawn on top of others get first crack at the event.  We will see an example of registering with a fixed priority level later on.
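If you ever need to stop listening, the dispatcher also provides removal methods.  A quick cleanup sketch, assuming the touchListener variable from init() is still in scope ( cocos2d-x also removes scene graph priority listeners automatically when their target node is destroyed ):

   // Stop receiving events for this particular listener
   _eventDispatcher->removeEventListener(touchListener);

   // ... or remove every listener that was registered with this node as its target
   _eventDispatcher->removeEventListenersForTarget(this);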

What’s this CC_CALLBACK_2 black magic?

I’m generally not a big fan of macro usage in C++.  I believe they lead programmers to eventually turn their libraries into meta-programming languages and ultimately obfuscate the underlying code in the name of clarity.  This however is one of the exceptions to the rule.  CC_CALLBACK_2, and the entire CC_CALLBACK_ family, is simply a wrapper around some standard C++ code, specifically a call to std::bind.  Here is the actual macro code:

#define CC_CALLBACK_0(__selector__,__target__, ...) std::bind(&__selector__,__target__, ##__VA_ARGS__)
#define CC_CALLBACK_1(__selector__,__target__, ...) std::bind(&__selector__,__target__, std::placeholders::_1, ##__VA_ARGS__)
#define CC_CALLBACK_2(__selector__,__target__, ...) std::bind(&__selector__,__target__, std::placeholders::_1, std::placeholders::_2, ##__VA_ARGS__)
#define CC_CALLBACK_3(__selector__,__target__, ...) std::bind(&__selector__,__target__, std::placeholders::_1, std::placeholders::_2, std::placeholders::_3, ##__VA_ARGS__)

Basically, std::bind binds parameters to a function.  The std::placeholders entries reserve spots for the arguments that will be supplied when the callback is finally invoked, so their count matches the number of parameters your function expects.  So for example, when you use CC_CALLBACK_2, you are saying that function takes two parameters, in this case a Touch* pointer and an Event* pointer.  Similarly, CC_CALLBACK_1 would expect the provided function to take a single parameter.  This kind of code is incredibly common in C++11; it’s also ugly, hard to read and easy to mistype.  In these cases, macro use shines.  Just be aware of what the macro you are calling actually does.  Each time you encounter a macro in code, I recommend you right click and “Go to Definition” ( or Cmd+Click in Xcode ) to see what it expands to, even if it doesn’t make complete sense.
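For comparison, here is roughly what the CC_CALLBACK_2 line from init() boils down to, first as the hand-written std::bind call and then as an equivalent lambda ( just a sketch; the macro version in the sample works fine ):

   // Equivalent of CC_CALLBACK_2(TouchScene::onTouchBegan, this); needs <functional> for std::bind
   touchListener->onTouchBegan = std::bind(&TouchScene::onTouchBegan, this,
      std::placeholders::_1, std::placeholders::_2);

   // The same wiring expressed as a lambda, which many people find easier to read
   touchListener->onTouchBegan = [this](Touch* touch, Event* event) -> bool {
      return this->onTouchBegan(touch, event);
   };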

In most of the touch handlers, we simply log that the event occurred.  In the event of a touch starting ( or click beginning ) we update the position of the label to where the user clicked and display the string “You Touched Here”.
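As a slightly more interesting use of onTouchMoved, the Touch pointer carries a getDelta() value describing how far the touch travelled since the previous event.  A small sketch ( not part of the sample ) that drags the label around instead of just logging:

void TouchScene::onTouchMoved(Touch* touch, Event* event)
{
   // getDelta() is already expressed in OpenGL ( world ) coordinates,
   // so we can simply add it to the label's current position
   labelTouchInfo->setPosition(labelTouchInfo->getPosition() + touch->getDelta());
}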

Now let’s take a look at an example that uses lambdas instead.  This example also goes into a bit more detail about what’s in that Touch pointer we are being passed.  The header file is basically the same, except there are no onTouch____ functions.

Handling Touch Events using Lambdas and dealing with Touch coordinates

TouchScene.cpp

#include "TouchScene.h"

USING_NS_CC;

Scene* TouchScene::createScene()
{
    auto scene = Scene::create();
    auto layer = TouchScene::create();
    scene->addChild(layer);

    return scene;
}

bool TouchScene::init()
{
    if ( !Layer::init() )
    {
        return false;
    }
    
   auto sprite = Sprite::create("HelloWorld.png");
   sprite->setPosition(Vec2(Director::getInstance()->getVisibleSize().width / 2,
      Director::getInstance()->getVisibleSize().height / 2));

    // Add a "touch" event listener to our sprite
   auto touchListener = EventListenerTouchOneByOne::create();
   touchListener->onTouchBegan = [](Touch* touch, Event* event) -> bool {

      auto bounds = event->getCurrentTarget()->getBoundingBox();

      if (bounds.containsPoint(touch->getLocation())){
         std::stringstream touchDetails;
         touchDetails << "Touched at OpenGL coordinates: " << 
            touch->getLocation().x << "," << touch->getLocation().y << std::endl <<
            "Touched at UI coordinate: " << 
            touch->getLocationInView().x << "," << touch->getLocationInView().y << std::endl <<
            "Touched at local coordinate:" <<
            event->getCurrentTarget()->convertToNodeSpace(touch->getLocation()).x << "," <<  
            event->getCurrentTarget()->convertToNodeSpace(touch->getLocation()).y << std::endl <<
            "Touch moved by:" << touch->getDelta().x << "," << touch->getDelta().y;

         MessageBox(touchDetails.str().c_str(), "Touched");
      }
      return true;
      };

   Director::getInstance()->getEventDispatcher()->addEventListenerWithSceneGraphPriority(touchListener,sprite);
   this->addChild(sprite, 0);
    
    return true;
}

Now when you run it:

[ Screenshot: message box showing the touch location in OpenGL, UI and local coordinates, plus the touch delta ]

In this example, we only react to the touch if the user clicked on the Sprite in the scene.  Notice how in the first line of the onTouchBegan handler I call event->getCurrentTarget()?  This is where the context becomes important.  In the line:

Director::getInstance()->getEventDispatcher()->addEventListenerWithSceneGraphPriority(touchListener,sprite);

The second parameter, sprite, is what determines the target of the Event.  The target is passed as a Node but can be cast if required.
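Since getCurrentTarget() hands you back a plain Node*, you cast it when you want Sprite specific functionality.  A minimal sketch in the same lambda style ( the opacity change is just an arbitrary example of doing something with the casted target ):

   touchListener->onTouchBegan = [](Touch* touch, Event* event) -> bool {
      // The target comes back as a Node*; cast it to get at Sprite features
      auto target = static_cast<Sprite*>(event->getCurrentTarget());

      if (target->getBoundingBox().containsPoint(touch->getLocation())) {
         target->setOpacity(128);   // fade the sprite while it is being touched
         return true;
      }
      return false;
   };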

Lambda?

Lambdas are a new feature of C++11 and they are probably something you will love or hate.  If you come from a language like C# that has had them for years, you will probably find them long overdue in C++, I certainly do!
Lambda is a scary sounding expression coming from the scary looking symbol λ.  In the world of mathematics, lambda calculus basically gives math the ability to define functions, something we as programmers can certainly appreciate.  In the world of programming it’s nowhere near as scary; a lambda expression can be thought of as an anonymous function.  In simple terms, it allows you to create a nameless function right where you need it.  As you can see from this example, it allows you to put event handling logic where it makes the most sense, instead of splitting it out into a separate function.  It is also a godsend when you want to pass a function as a parameter, a very common task in the C++ standard libraries.
The syntax of C++ lambdas is pretty ugly, but they are certainly a valuable addition to the language.  Most importantly, they can often make your code easier to express and, as such, easier to comprehend and maintain.  Learn to love the lambda and the lambda will learn to love you.  Maybe.

In this example, we use the target node to only handle clicks that happen within the bounds of our Sprite node.  This is done by testing whether the touch location is within the bounding box of the node.  If it is, we display a number of details in a message box.  Remember earlier in this series when I said there are multiple coordinate systems?  This is a perfect example.  As you can see from the message box above, getLocation() and getLocationInView() return different values: getLocationInView() is relative to the top left corner of the screen, while getLocation() is relative to the bottom left corner.
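If you ever need to convert between those two coordinate systems yourself, Director exposes a pair of helpers.  A quick sketch, assuming a point that came from getLocationInView() inside a touch handler:

   // UI coordinates: origin at the top left of the screen, Y grows downward
   Vec2 uiPoint = touch->getLocationInView();

   // Convert to OpenGL / world coordinates ( origin at the bottom left, Y grows upward )...
   Vec2 glPoint = Director::getInstance()->convertToGL(uiPoint);

   // ... and back again
   Vec2 backToUi = Director::getInstance()->convertToUI(glPoint);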

Sometimes you also want to know where the click occurred relative to the node itself.  As in the sample above, the local coordinate is the position of the click relative to the node’s origin.  To calculate this location we use the helper function convertToNodeSpace().  One final thing you may notice is that I registered the EventListener through Director::getInstance()->getEventDispatcher() instead of the _eventDispatcher member.  I did it this way for a couple of reasons.  First, to show that you can ( Node simply caches the Director’s dispatcher, so they refer to the same thing ).  Second, because _eventDispatcher is a protected member of Node, the sprite’s own copy would only be accessible if I derived my own Sprite class.
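For completeness, here is a rough sketch of what such a derived Sprite might look like; the class name TouchableSprite is just a placeholder for illustration, and HelloWorld.png is simply the image that ships with the template project:

// TouchableSprite.h -- hypothetical example class, not part of the sample project
#pragma once

#include "cocos2d.h"

class TouchableSprite : public cocos2d::Sprite
{
public:
    CREATE_FUNC(TouchableSprite);

    virtual bool init() override
    {
        if (!initWithFile("HelloWorld.png"))
            return false;

        auto listener = cocos2d::EventListenerTouchOneByOne::create();
        listener->onTouchBegan = [this](cocos2d::Touch* touch, cocos2d::Event*) -> bool {
            // getBoundingBox() is in the parent's space, so this assumes the parent
            // sits at the origin without any transform applied
            return this->getBoundingBox().containsPoint(touch->getLocation());
        };

        // _eventDispatcher is a protected member of Node, so it is visible here
        _eventDispatcher->addEventListenerWithSceneGraphPriority(listener, this);
        return true;
    }
};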

Now let’s take a look at a multi-touch example.

Dealing with Multi-touch

Multi-touch works pretty much the same way, just with a separate set of event handlers.  There are a few catches however.  The big one is iOS.  Out of the box, Android just works.  iOS however requires you to make a small code change to enable multitouch support.  Don’t worry, it’s a simple process. 

In your project, locate the directory /proj.ios_mac/ios and open the file AppController.mm.  Somewhere after the creation of eaglView, add the following line:

[eaglView setMultipleTouchEnabled:YES];

Now multitouch should work in your iOS application.  Let’s look at some code:

MultiTouchScene.h

#pragma once

#include "cocos2d.h"

class MultiTouch : public cocos2d::Layer
{

    public:
        static cocos2d::Scene* createScene();

        virtual bool init();
        CREATE_FUNC(MultiTouch);
    private:
        const static int MAX_TOUCHES = 5;

    protected:
        cocos2d::Label* labelTouchLocations[MAX_TOUCHES];

};

MultiTouchScene.cpp

#include "MultiTouchScene.h"

USING_NS_CC;

Scene* MultiTouch::createScene()
{
    auto scene = Scene::create();
    auto layer = MultiTouch::create();
    scene->addChild(layer);

    return scene;
}

bool MultiTouch::init()
{
    if ( !Layer::init() )
    {
        return false;
    }

    // Create an array of Labels to display touch locations and add them to this node, defaulted to invisible
    for(int i= 0; i < MAX_TOUCHES; ++i) {
        labelTouchLocations[i] = Label::createWithSystemFont("", "Arial", 42);
        labelTouchLocations[i]->setVisible(false);
        this->addChild(labelTouchLocations[i]);
    }

    auto eventListener = EventListenerTouchAllAtOnce::create();

    //  Create an eventListener to handle multiple touches, using a lambda, cause baby, it's C++11
    eventListener->onTouchesBegan = [=](const std::vector<Touch*>&touches, Event* event){

        // Clear all visible touches, just in case there are fewer fingers touching than last time
        std::for_each(labelTouchLocations,labelTouchLocations+MAX_TOUCHES,[](Label* touchLabel){
            touchLabel->setVisible(false);
        });

        // For each touch in the touches vector, set a Label to display at its location and make it visible
        for(int i = 0; i < touches.size(); ++i){
            labelTouchLocations[i]->setPosition(touches[i]->getLocation());
            labelTouchLocations[i]->setVisible(true);
            labelTouchLocations[i]->setString("Touched");
        }
    };

    _eventDispatcher->addEventListenerWithSceneGraphPriority(eventListener, this);

    return true;
}

Here is the code running on my iPad with multiple fingers touched:

[ Screenshot: the sample running on an iPad, showing a “Touched” label under each finger ]

Granted, not the most exciting screenshot ever, but as you can see, a label is displayed at each location the user touches.  Let’s take a quick look at the code and see what’s happening.  At this point, most of it should be pretty familiar, so let’s just focus on the differences.

First you will notice I added an array of Labels, MAX_TOUCHES in size.  I chose 5 as, frankly, that seems to be the limit of what I could register on the iPad.  I had it set to 10, but it never registered more than 5, so 5 it was!  Truth of the matter is, I can’t really imagine a control scheme that used more than 5 touches being all that useful, so 5 touches seems like a reasonable limitation, even though I’m pretty certain the hardware can handle more.

In our init() we start off by allocating each of our labels and setting their initial visibility to invisible.  Then we create our EventListener; this time it’s an EventListenerTouchAllAtOnce because we want to, well, get all the touch events at the same time.  Instead of handling onTouchBegan, we handle onTouchesBegan, which takes a std::vector of Touch* ( careful here, as cocos2d has its own Vector class… the peril of “using namespace” abuse! ) as well as an Event*.

In the event of touch(es), we first loop through all of our labels and set them to invisible.  Then for each touch in the touches vector, we move a label to that position and make it visible.  Once again we register the EventListener with our node’s _eventDispatcher.
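The same listener also exposes onTouchesMoved, onTouchesEnded and onTouchesCancelled, which all take the same vector of touches.  Here is a sketch of how the example could be extended inside init() so the labels follow the fingers and disappear when they lift ( a real game would track individual touches by Touch::getID() rather than by vector index ):

    eventListener->onTouchesMoved = [=](const std::vector<Touch*>& touches, Event* event){
        // Keep each visible label glued to its finger as it moves
        for (int i = 0; i < (int)touches.size() && i < MAX_TOUCHES; ++i)
            labelTouchLocations[i]->setPosition(touches[i]->getLocation());
    };

    eventListener->onTouchesEnded = [=](const std::vector<Touch*>& touches, Event* event){
        // Naively hide every label once touches end; onTouchesBegan will light them up again
        for (int i = 0; i < MAX_TOUCHES; ++i)
            labelTouchLocations[i]->setVisible(false);
    };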

So, we’ve covered touch and multi-touch, what about when you want to use the mouse?  Amazingly enough there are users out there with mice with more than a single button after all! 😉

Handling the Mouse

At this point you can probably guess the code I am about to write, as the process is remarkably similar, but let’s go through it anyway.  I won’t bother with the .h file; there’s nothing special in there.

MouseScene.cpp

#include "MouseScene.h"

USING_NS_CC;

cocos2d::Scene* MouseScene::createScene()
{
    auto scene = Scene::create();
    auto layer = MouseScene::create();
    scene->addChild(layer);

    return scene;
}

bool MouseScene::init()
{
    if ( !Layer::init() )
    {
        return false;
    }   

   auto listener = EventListenerMouse::create();
   listener->onMouseDown = [](cocos2d::Event* event){

      // dynamic_cast on a pointer returns nullptr on failure rather than throwing,
      // so check the result instead of relying on a try/catch
      EventMouse* mouseEvent = dynamic_cast<EventMouse*>(event);
      if (mouseEvent == nullptr)
         return;

      std::stringstream message;
      // the cast keeps the stream happy whether getMouseButton() returns an int or an enum
      message << "Mouse event: Button: " << static_cast<int>(mouseEvent->getMouseButton()) <<
         " pressed at point (" <<
         mouseEvent->getLocation().x << "," << mouseEvent->getLocation().y << ")";
      MessageBox(message.str().c_str(), "Mouse Event Details");
   };

   listener->onMouseMove = [](cocos2d::Event* event){
      // Cast Event to EventMouse for position details like above
      cocos2d::log("Mouse moved event");
   };

   listener->onMouseScroll = [](cocos2d::Event* event){
      cocos2d::log("Mouse wheel scrolled");
   };

   listener->onMouseUp = [](cocos2d::Event* event){
      cocos2d::log("Mouse button released");
   };

   _eventDispatcher->addEventListenerWithFixedPriority(listener, 1);

    return true;
}

Now run it, scroll the mouse wheel a couple times, click and you will see:

[ Screenshot: message box reporting the mouse button and click location, with mouse scroll and move events logged in the console ]

Yeah… not really exciting either.  As you can see, when you click a mouse button, the button is reported as a number: the left button is 0, the right is 1, the middle is 2 and so on.  The code is all very familiar, except this time we use an EventListenerMouse and handle onMouseDown, onMouseUp, onMouseMove and onMouseScroll.  The only other thing of note is that you need to cast the provided Event pointer to an EventMouse pointer to get access to the mouse details.
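The EventMouse class exposes a bit more than just the button.  Here is a sketch of an onMouseScroll handler that actually reads the scroll deltas and cursor position, using the same cast as above:

   listener->onMouseScroll = [](cocos2d::Event* event){
      auto mouseEvent = dynamic_cast<cocos2d::EventMouse*>(event);
      if (mouseEvent == nullptr)
         return;

      // Positive and negative values tell you which way the wheel ( or trackpad ) moved
      cocos2d::log("Scrolled x:%f y:%f at cursor %f,%f",
         mouseEvent->getScrollX(), mouseEvent->getScrollY(),
         mouseEvent->getCursorX(), mouseEvent->getCursorY());
   };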

With the exception of gestures, that should pretty much cover all of your mouse and touch needs.  Gestures aren’t actually supported out of the box, but extensions exist.  Additionally, all mouse and touch events contain delta information as well as data on the previous touch/click, which should make rolling your own fairly simple.
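As a taste of what rolling your own gesture might look like, here is a very naive swipe detection sketch built on nothing more than onTouchBegan/onTouchEnded and Touch::getStartLocation().  The 50 pixel threshold is an arbitrary value chosen purely for illustration, and the code assumes it lives inside a scene’s init() with access to _eventDispatcher:

   auto swipeListener = EventListenerTouchOneByOne::create();

   swipeListener->onTouchBegan = [](Touch* touch, Event* event) -> bool {
      return true;   // claim every touch so we receive the matching onTouchEnded
   };

   swipeListener->onTouchEnded = [](Touch* touch, Event* event){
      // getStartLocation() remembers where this touch originally began
      float dx = touch->getLocation().x - touch->getStartLocation().x;

      if (dx > 50.0f)
         cocos2d::log("Swiped right");
      else if (dx < -50.0f)
         cocos2d::log("Swiped left");
   };

   _eventDispatcher->addEventListenerWithSceneGraphPriority(swipeListener, this);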

