22. November 2014


 

The Urho3D C++ cross-platform game engine just released version 1.32.  The following is the change log from this release:

 

 

  • Finalized Urho2D functionality, including 2D physics using Box2D, sprite animation and tile maps
  • Threaded background resource loading. Must be manually triggered via ResourceCache or by loading a scene asynchronously
  • Attribute and material shader parameter animation system
  • Customizable onscreen joystick for mobile platforms. Used in examples
  • Touch camera control in examples on mobile platforms
  • Touch emulation by mouse
  • Multi-touch UI drag support
  • Consistent touch IDs across platforms
  • Absolute, relative and wrap modes for the operating system mouse cursor
  • Support for connecting & removing joysticks during runtime
  • Negative light & light brightness multiplier support
  • Transform spaces for Node’s translate, rotate & lookat functions
  • Scrollable console
  • Selectable console command interpreter (AngelScript, Lua, FileSystem)
  • Touch scroll in ScrollView & ListView
  • UI layout flex scale mode
  • Custom sound streams from C++
  • LogicComponent C++ base class with virtual update functions similar to ScriptObject
  • Signed distance field font support
  • JSON data support
  • Matrix types in Variant & XML data
  • Intermediate rendertarget refactoring: use viewport size to allow consistent UV addressing
  • ParticleEmitter refactoring: use ParticleEffect resource for consistency with ParticleEmitter2D and more optimal net replication
  • Expose LZ4 compression functions
  • Support various cube map layouts contained in a single image file
  • Configurable Bullet physics stepping behavior. Can use elapsed time limiting, or a variable timestep to use less CPU
  • Default construct math objects to zero / identity
  • Mandatory registration for remote events. Check allowed event only when receiving
  • Teapot & torus builtin objects
  • FXAA 3.11 shader
  • Triangle rendering in DebugRenderer (more efficient than 3 lines)
  • Material/texture quality and anisotropy as command line options and engine startup parameters
  • Spline math class, which the SplinePath component uses
  • Console auto-show on error
  • DrawableProxy2D system for optimizing 2D sprite drawing
  • Possibility to decouple BorderImage border UVs from element size
  • Editor & NinjaSnowWar resources split into subdirectories
  • UI hover start & end events
  • UI drag cancel by pressing ESC
  • Allowed screen orientations can be controlled. Effective only on iOS
  • Rendering sceneless renderpaths
  • Define individual material passes as SM3-only
  • Support for copying ListView text to system clipboard
  • Async system command execution
  • Generic attribute access for Lua script objects
  • Use Lua functions directly as event subscribers
  • Touch gesture recording and load/save
  • AssetImporter option to allow multiple import of identical meshes
  • Automatically create a physics world component to scene when necessary
  • GetSubimage function in the Image class
  • Possibility to clone existing components from another scene node
  • Improve terrain rendering on mobile devices
  • Refactoring of camera facing modes in BillboardSet & Text3D
  • Additive alpha techniques for particle rendering
  • Possibility to use CustomGeometry component for physics triangle mesh collision
  • Access to 2D node coordinates for convenience when using 2D graphics features
  • Save embedded textures in AssetImporter
  • Use best matching fullscreen resolution if no exact match
  • Use SDL_iPhoneSetAnimationCallback instead of blocking main loop
  • Allow fast partial terrain updates by modifying the heightmap image
  • API for setting image pixels by integer colors
  • Refactor to remove the separate ShortStringHash class
  • Deep clone functionality in Model resource
  • Zone can define a texture which is available to shaders. Not used by default
  • Allow logging from outside the main thread
  • Log warnings for improper attempts to use events from outside main thread
  • Improved CustomGeometry dynamic updates
  • ConvexCast function in PhysicsWorld
  • Screen to world space conversion functions in Viewport class
  • Allow sending client rotation to server in addition to position
  • Allow accessing and modifying the engine’s next timestep
  • DeepEnabled mechanism for disabling node or UI element hierarchies and then restoring their own enabled state
  • Allow to prevent closing a modal window with ESC
  • Per-viewport control of whether debug geometry should render
  • Optional interception of resource requests
  • Readded optional slow & robust mode to AreaAllocator
  • Optionally disable RigidBody mass update to allow fast adding of several CollisionShape components to the same node
  • Runtime synchronization of resource packages from server to client
  • Disable multisample antialiasing momentarily during rendering. Used by default for UI & quad rendering
  • Glyph offset support in Font class
  • Font class internal refactoring
  • Allow to create AngelScript script objects by specifying the interface it implements
  • Window position startup parameters
  • Functions to get time since epoch & modify file’s last modified time
  • Optionally auto-disable child elements of a scroll view when touch scrolling
  • Allocate views permanently per viewport to allow querying for drawables, lights etc. reliably
  • Allow to specify material techniques/passes that should not be used on mobile devices
  • Reduced default shadow mapping issues on mobile devices
  • Minor rendering optimizations
  • Build system: possibility to build Urho3D without networking or 2D graphics functionality
  • Build system: improved generated scripting documentation
  • Build system: improved support for IDEs in CMake scripts
  • Build system: support up to Android NDK r10c and 64-bit ABIs
  • Build system: numerous other improvements
  • Editor: resource browser
  • Editor: spawn window for random-generating objects
  • Editor: allow either zoom or move from mouse wheel
  • Editor: locate object by doubleclicking node in hierarchy
  • Editor: take screenshots with F11, camera panning
  • Editor: button in value edit fields that allows editing by mouse drag
  • Updated SDL to 2.0.3.
  • Updated AngelScript to 2.29.1
  • Updated assimp
  • Updated Recast/Detour
  • Fix MinGW build issues
  • Fix techniques referring to wrong shaders
  • Fix Node::LookAt() misbehaving in certain situations
  • Fix resize event not reporting correct window size if window is maximized at start
  • Fix PhysicsWorld::GetRigidBodies() not using collision mask
  • Fix zone misassignment issues
  • Fix Lua not returning correctly typed object for UIElement::GetChild() & UIElement::GetParent()
  • Fix uninitialized variables in 2D physics components
  • Fix quad rendering not updating elapsed time uniform
  • Fix forward rendering normal mapping issues by switching calculations back to world space
  • Fix wrong logging level on Android
  • Fix multiple subscribes to same event on Lua
  • Fix missing Octree update in headless mode
  • Fix crash when using FreeType to access font kerning tables
  • Fix ReadString() endless loop if the string does not end
  • Fix shadow mapping on OS X systems with Intel GPU
  • Fix manually positioned bones being serialized properly
  • Fix file checksum calculation on Android
  • Fix accelerometer input on Android when device is flipped 180 degrees
  • Fix missing or misbehaving Lua bindings
  • Fix crashes in physics collision handling when objects are removed during it
  • Fix shader live reload if previous compile resulted in error
  • Fix named manual textures not recreating their GPU resource after device loss
  • Fix skeleton-only model not importing in AssetImporter
  • Fix terrain raycast returning incorrect position/normal
  • Fix animation keyframe timing in AssetImporter if start time is not 0
  • Fix storing Image resources to memory unnecessarily during cube/3D texture loading
  • Fix to node transform dirtying mechanism and the TransformChanged() script function
  • Fix returned documents directory not being writable on iOS
  • Fix click to emptiness not closing a menu
  • Fix FileWatcher notifying when file was still being saved. By default delay notification 1 second
  • Fix .txml import in the editor
  • Fix erroneous raycast to triangles behind the ray
  • Fix crash when multiple AnimatedModels exist in a node and the master model is destroyed
  • Fix missing Matrix4 * Matrix3x4 operator in script
  • Fix various compile warnings that leak to applications using Urho3D
  • Fix DebugHud update possibly being late one frame
  • Fix various macros not being usable outside Urho3D namespace
  • Fix erroneous layout with wordwrap text elements
  • Fix debug geometry rendering on flipped OpenGL viewports
  • Fix kNet debug mode assert with zero sized messages
  • Fix not being able to stop and restart kNet server
  • Fix AreaAllocator operation
  • Fix possible crash with parented rigidbodies
  • Fix missing network delta update if only user variables in a Node have been modified
  • Fix to only search for June 2010 DirectX SDK, as earlier SDKs will fail
  • Fix wrong search order of added resource paths
  • Fix global anisotropic filtering on OpenGL
  • Fix animation triggers not working if trigger is at animation end
  • Fix CopyFramebuffer shader name not being used correctly on case-sensitive systems
  • Fix UI elements not receiving input when the window containing them is partially outside the screen to the left
  • Fix occlusion rendering not working with counterclockwise triangles
  • Fix material shader parameter animations going out of sync with other animations when the object using the material is not in view
  • Fix CPU count functions on Android

 

You can download the library here.

 

The project homepage is available here.

Programming


16. October 2014

 

GameFromScratch has a long-running “A closer look at” series of articles that take a deep dive into a particular game engine.  Hopefully, by the end, you the reader will have an idea of whether this engine is the right fit for you or not.

 

Today we are looking at the Urho3D engine, a game engine that somehow flew below my radar for a very long time.  It started life as Bofh3D but apparently was renamed after a tyrannical fish king (seriously… hence the fish in the logo to your right!).  According to Google Translate, Urho is Finnish for “Brave”, while 3D… I hope you know that one by this point!  The 3D is a bit of a misnomer though, as like many modern 3D engines there is a 2D component as well, so you can just as easily make 2D games if that’s what you want to do.

 

Let’s start straight away with their own description:

 

Urho3D is a lightweight, cross-platform 2D and 3D game engine implemented in C++ and released under the MIT license. Greatly inspired by OGRE and Horde3D.

 

Features

  • Direct3D9 or OpenGL rendering (Shader Model 2, OpenGL 2.0 or OpenGL ES 2.0 required as minimum)
  • HLSL or GLSL shaders + caching of HLSL bytecode
  • Configurable rendering pipeline. Default implementations for forward, light pre-pass and deferred rendering
  • Component based scene model
  • Skeletal (with hardware skinning), vertex morph and node animation
  • Automatic instancing on SM3 capable hardware
  • Point, spot and directional lights
  • Shadow mapping for all light types; cascaded shadow maps for directional lights
  • Particle rendering
  • Geomipmapped terrain
  • Static and skinned decals
  • Auxiliary view rendering (reflections etc.)
  • Geometry, material & animation LOD
  • Software rasterized occlusion culling
  • Post-processing
  • HDR rendering (v1.31)
  • 2D sprites and particles that integrate into the 3D scene (v1.31)
  • Task-based multithreading
  • Hierarchical performance profiler
  • Scene and object load/save in binary and XML format
  • Keyframe animation of object attributes (new)
  • Background loading of resources (new)
  • Keyboard, mouse, joystick and touch input (if available)
  • Cross-platform support using SDL 2.0 (currently runs on Windows, Linux, Mac OS X, Android, iOS and, since v1.3, Raspberry Pi)
  • Physics using Bullet
  • 2D physics using Box2D (new)
  • Scripting using AngelScript
  • Alternative script interface using Lua (v1.3) or LuaJIT (v1.31) (on Windows, Linux, Mac OS X, Android, and Raspberry Pi)
  • Networking using kNet + possibility to make HTTP requests (v1.3)
  • Pathfinding using Recast/Detour (v1.23)
  • Image loading using stb_image + DDS / KTX / PVR compressed texture support
  • 2D and “3D” audio playback, Ogg Vorbis support using stb_vorbis + WAV format support
  • TrueType font rendering using FreeType, AngelCode bitmap fonts are also supported
  • Unicode string support
  • Inbuilt UI system
  • Scene editor and UI-layout editor implemented in script with undo & redo capabilities
  • Model/scene/animation/material import from formats supported by Open Asset Import Library
  • Alternative model/animation import from OGRE mesh.xml and skeleton.xml files
  • Supported build tools and IDEs: Visual Studio, Xcode, Eclipse, CodeBlocks, GCC, LLVM/Clang, MinGW-W64
  • Supports both 32-bit and 64-bit (v1.3) builds
  • Build as a single external library (v1.3) (can be linked against statically or dynamically)

 

The line “greatly inspired by OGRE” seems incredibly accurate to me.  On my initial explorations, that is what it most reminded me of, and that is certainly not an insult.  My nutshell description of Urho3D is:

 

A cross platform, open source, C++ based, Lua and AngelScript scripted game engine that runs on Windows, Mac and Linux and can target all those plus iOS, Android and Raspberry Pi.

 

So the question remains, what’s the developer experience like?  Well, let’s find out!

 

The Source Code

 

Being open source, Urho3D is available on GitHub.

image

 

I only took a quick browse through the actual code, but from what I saw, it’s clean and written in a modern C++ style.  The project is laid out intuitively, the engine and platforms are nicely decoupled, and things are pretty much where you would expect them to be.  The code is fairly sparsely commented, but the things that need to be commented are.  We will touch on the documentation a bit later on.

 

Getting Started

 

Getting started is pretty simple.  Download the source code archive, extract it and use CMake to generate the project files for your platform of choice.  I was up and running in a matter of minutes; however, I already had all of the required development tools installed and configured.  If you’ve never used CMake before you may be in for a bit of a fight, and if something goes wrong, CMake starts to feel strangely like black magic.  For me though, it mostly just worked.  A warning, though: download the master branch!  The version linked on their main page is outdated to the point that the getting started documentation doesn’t actually work with it.  They really should update the official release version so that it matches their getting started manual!

 

Once unzipped, Urho3D looks something like this:

image

 

Simply run the .sh or .bat file appropriate to your platform and you are good to go.  One thing to be aware of up front: Urho3D has samples in both AngelScript and C++, but by default the C++ sample projects aren’t created by CMake.  If you want them, add -DURHO3D_SAMPLES=1 when calling the script.  Additionally, Lua support isn’t enabled out of the box; if you want Lua support, add -DURHO3D_LUA=1.

 

So for example, to get started on Windows using Visual Studio 2013, with Lua and C++ sample support, run:

cmake_vs2013.bat -DURHO3D_SAMPLES=1 -DURHO3D_LUA=1

Now if you go into the Build directory, you will see Visual Studio (or Xcode, or Makefile, whatever you chose) projects.

 

image

 

Simply open Urho3D.sln in Visual Studio and you are done.

 

Samples, Samples and More Samples

 

This is one area where Urho3D is well represented.  There are a number of included samples, written in both AngelScript and C++.  Here they are:

 

image

 

For C++, each sample is a project within your solution.  In the case of AngelScript, however, each is simply a script file to be run.  Once you’ve built the engine, you should have a tool named Urho3DPlayer (or Urho3DPlayer_d if you built for debug).  This is a command line utility; simply run it and pass in the path to a script to run.  The scripts are located under the Bin folder in the directory /Data/Scripts.

image

 

They are the same examples as the C++ ones, except of course implemented in AngelScript.

From the command line, in the bin folder, running:

Urho3DPlayer Data\Scripts\11_Physics.as

Will then load and run the script:

image

 

It’s worth noting that I also used the -w switch to run the player in windowed mode so I could take a screenshot.  Hit ESC to exit.  Oh, and Urho3D has the annoying behavior of grabbing your mouse cursor, so don’t worry when you lose your mouse cursor (even windowed); exit with ESC or alt-tab away and you get your cursor back.  I really hate it when windowed applications take complete control of my mouse!

 

The code in the samples is well documented, and they cover a wide variety of topics.  This is most likely going to be your primary learning source for getting up to speed quick.

 

To get an idea of a Urho3D application’s structure, let’s take a look at one of the samples, 03_Sprites.  When run, it will do this (except in motion that is):

 

image

 

Now let’s take a look at the corresponding AngelScript and C++ sources.

 

03_Sprites.as

 

// Moving sprites example.
// This sample demonstrates:
//     - Adding Sprite elements to the UI
//     - Storing custom data (sprite velocity) inside UI elements
//     - Handling frame update events in which the sprites are moved

#include "Scripts/Utilities/Sample.as"

// Number of sprites to draw
const uint NUM_SPRITES = 100;

Array<Sprite@> sprites;

void Start()
{
    // Execute the common startup for samples
    SampleStart();

    // Create the sprites to the user interface
    CreateSprites();

    // Hook up to the frame update events
    SubscribeToEvents();
}

void CreateSprites()
{
    // Get rendering window size as floats
    float width = graphics.width;
    float height = graphics.height;

    // Get the Urho3D fish texture
    Texture2D@ decalTex = cache.GetResource("Texture2D", "Textures/UrhoDecal.dds");

    for (uint i = 0; i < NUM_SPRITES; ++i)
    {
        // Create a new sprite, set it to use the texture
        Sprite@ sprite = Sprite();
        sprite.texture = decalTex;

        // The UI root element is as big as the rendering window, set random position within it
        sprite.position = Vector2(Random() * width, Random() * height);

        // Set sprite size & hotspot in its center
        sprite.size = IntVector2(128, 128);
        sprite.hotSpot = IntVector2(64, 64);

        // Set random rotation in degrees and random scale
        sprite.rotation = Random() * 360.0f;
        sprite.SetScale(Random(1.0f) + 0.5f);

        // Set random color and additive blending mode
        sprite.color = Color(Random(0.5f) + 0.5f, Random(0.5f) + 0.5f, Random(0.5f) + 0.5f);
        sprite.blendMode = BLEND_ADD;

        // Add as a child of the root UI element
        ui.root.AddChild(sprite);

        // Store sprite's velocity as a custom variable
        sprite.vars["Velocity"] = Vector2(Random(200.0f) - 100.0f, Random(200.0f) - 100.0f);

        // Store sprites to our own container for easy movement update iteration
        sprites.Push(sprite);
    }
}

void MoveSprites(float timeStep)
{
    float width = graphics.width;
    float height = graphics.height;

    // Go through all sprites
    for (uint i = 0; i < sprites.length; ++i)
    {
        Sprite@ sprite = sprites[i];

        // Rotate
        float newRot = sprite.rotation + timeStep * 30.0f;
        sprite.rotation = newRot;

        // Move, wrap around rendering window edges
        Vector2 newPos = sprite.position + sprite.vars["Velocity"].GetVector2() * timeStep;
        if (newPos.x < 0.0f)
            newPos.x += width;
        if (newPos.x >= width)
            newPos.x -= width;
        if (newPos.y < 0.0f)
            newPos.y += height;
        if (newPos.y >= height)
            newPos.y -= height;
        sprite.position = newPos;
    }
}

void SubscribeToEvents()
{
    // Subscribe HandleUpdate() function for processing update events
    SubscribeToEvent("Update", "HandleUpdate");
}

void HandleUpdate(StringHash eventType, VariantMap& eventData)
{
    // Take the frame time step, which is stored as a float
    float timeStep = eventData["TimeStep"].GetFloat();

    // Move sprites, scale movement with time step
    MoveSprites(timeStep);
}

// Create XML patch instructions for screen joystick layout specific to this sample app
String patchInstructions =
        "<patch>" +
        "    <add sel=\"/element/element[./attribute[@name='Name' and @value='Hat0']]\">" +
        "        <attribute name=\"Is Visible\" value=\"false\" />" +
        "    </add>" +
        "</patch>";

 

And now the C++ versions:

Sprites.h

#pragma once

#include "Sample.h"

/// Moving sprites example.
/// This sample demonstrates:
///     - Adding Sprite elements to the UI
///     - Storing custom data (sprite velocity) inside UI elements
///     - Handling frame update events in which the sprites are moved
class Sprites : public Sample
{
    // Enable type information.
    OBJECT(Sprites);

public:
    /// Construct.
    Sprites(Context* context);

    /// Setup after engine initialization and before running the main loop.
    virtual void Start();

protected:
    /// Return XML patch instructions for screen joystick layout for a specific sample app, if any.
    virtual String GetScreenJoystickPatchString() const { return
        "<patch>"
        "    <add sel=\"/element/element[./attribute[@name='Name' and @value='Hat0']]\">"
        "        <attribute name=\"Is Visible\" value=\"false\" />"
        "    </add>"
        "</patch>";
    }

private:
    /// Construct the sprites.
    void CreateSprites();
    /// Move the sprites using the delta time step given.
    void MoveSprites(float timeStep);
    /// Subscribe to application-wide logic update events.
    void SubscribeToEvents();
    /// Handle the logic update event.
    void HandleUpdate(StringHash eventType, VariantMap& eventData);

    /// Vector to store the sprites for iterating through them.
    Vector<SharedPtr<Sprite> > sprites_;
};

 

Sprites.cpp

#include "CoreEvents.h"
#include "Engine.h"
#include "Graphics.h"
#include "ResourceCache.h"
#include "Sprite.h"
#include "Texture2D.h"
#include "UI.h"

#include "Sprites.h"

#include "DebugNew.h"

// Number of sprites to draw
static const unsigned NUM_SPRITES = 100;

// Custom variable identifier for storing sprite velocity within the UI element
static const StringHash VAR_VELOCITY("Velocity");

DEFINE_APPLICATION_MAIN(Sprites)

Sprites::Sprites(Context* context) :
    Sample(context)
{
}

void Sprites::Start()
{
    // Execute base class startup
    Sample::Start();

    // Create the sprites to the user interface
    CreateSprites();

    // Hook up to the frame update events
    SubscribeToEvents();
}

void Sprites::CreateSprites()
{
    ResourceCache* cache = GetSubsystem<ResourceCache>();
    Graphics* graphics = GetSubsystem<Graphics>();
    UI* ui = GetSubsystem<UI>();

    // Get rendering window size as floats
    float width = (float)graphics->GetWidth();
    float height = (float)graphics->GetHeight();

    // Get the Urho3D fish texture
    Texture2D* decalTex = cache->GetResource<Texture2D>("Textures/UrhoDecal.dds");

    for (unsigned i = 0; i < NUM_SPRITES; ++i)
    {
        // Create a new sprite, set it to use the texture
        SharedPtr<Sprite> sprite(new Sprite(context_));
        sprite->SetTexture(decalTex);

        // The UI root element is as big as the rendering window, set random position within it
        sprite->SetPosition(Vector2(Random() * width, Random() * height));

        // Set sprite size & hotspot in its center
        sprite->SetSize(IntVector2(128, 128));
        sprite->SetHotSpot(IntVector2(64, 64));

        // Set random rotation in degrees and random scale
        sprite->SetRotation(Random() * 360.0f);
        sprite->SetScale(Random(1.0f) + 0.5f);

        // Set random color and additive blending mode
        sprite->SetColor(Color(Random(0.5f) + 0.5f, Random(0.5f) + 0.5f, Random(0.5f) + 0.5f));
        sprite->SetBlendMode(BLEND_ADD);

        // Add as a child of the root UI element
        ui->GetRoot()->AddChild(sprite);

        // Store sprite's velocity as a custom variable
        sprite->SetVar(VAR_VELOCITY, Vector2(Random(200.0f) - 100.0f, Random(200.0f) - 100.0f));

        // Store sprites to our own container for easy movement update iteration
        sprites_.Push(sprite);
    }
}

void Sprites::MoveSprites(float timeStep)
{
    Graphics* graphics = GetSubsystem<Graphics>();
    float width = (float)graphics->GetWidth();
    float height = (float)graphics->GetHeight();

    // Go through all sprites
    for (unsigned i = 0; i < sprites_.Size(); ++i)
    {
        Sprite* sprite = sprites_[i];

        // Rotate
        float newRot = sprite->GetRotation() + timeStep * 30.0f;
        sprite->SetRotation(newRot);
        
        // Move, wrap around rendering window edges
        Vector2 newPos = sprite->GetPosition() + sprite->GetVar(VAR_VELOCITY).GetVector2() * timeStep;
        if (newPos.x_ < 0.0f)
            newPos.x_ += width;
        if (newPos.x_ >= width)
            newPos.x_ -= width;
        if (newPos.y_ < 0.0f)
            newPos.y_ += height;
        if (newPos.y_ >= height)
            newPos.y_ -= height;
        sprite->SetPosition(newPos);
    }
}

void Sprites::SubscribeToEvents()
{
    // Subscribe HandleUpdate() function for processing update events
    SubscribeToEvent(E_UPDATE, HANDLER(Sprites, HandleUpdate));
}

void Sprites::HandleUpdate(StringHash eventType, VariantMap& eventData)
{
    using namespace Update;

    // Take the frame time step, which is stored as a float
    float timeStep = eventData[P_TIMESTEP].GetFloat();
    
    // Move sprites, scale movement with time step
    MoveSprites(timeStep);
}

 

EDIT: And the Lua example as well:

03_Sprites.lua

-- Moving sprites example.
-- This sample demonstrates:
--     - Adding Sprite elements to the UI
--     - Storing custom data (sprite velocity) inside UI elements
--     - Handling frame update events in which the sprites are moved

require "LuaScripts/Utilities/Sample"

local numSprites = 100
local sprites = {}

-- Custom variable identifier for storing sprite velocity within the UI element
local VAR_VELOCITY = StringHash("Velocity")

function Start()
    -- Execute the common startup for samples
    SampleStart()

    -- Create the sprites to the user interface
    CreateSprites()

    -- Hook up to the frame update events
    SubscribeToEvents()
end

function CreateSprites()
    local decalTex = cache:GetResource("Texture2D", "Textures/UrhoDecal.dds")

    local width = graphics.width
    local height = graphics.height

    for i = 1, numSprites do
        -- Create a new sprite, set it to use the texture
        local sprite = Sprite:new()
        sprite.texture = decalTex
        sprite:SetFullImageRect()

        -- The UI root element is as big as the rendering window, set random position within it
        sprite.position = Vector2(Random(width), Random(height))

        -- Set sprite size & hotspot in its center
        sprite:SetSize(128, 128)
        sprite.hotSpot = IntVector2(64, 64)

        -- Set random rotation in degrees and random scale
        sprite.rotation = Random(360.0)
        sprite.scale = Vector2(1.0, 1.0) * (Random(1.0) + 0.5)

        -- Set random color and additive blending mode
        sprite:SetColor(Color(Random(0.5) + 0.5, Random(0.5) + 0.5, Random(0.5) + 0.5, 1.0))
        sprite.blendMode = BLEND_ADD

        -- Add as a child of the root UI element
        ui.root:AddChild(sprite)

        -- Store sprite's velocity as a custom variable
        sprite:SetVar(VAR_VELOCITY, Variant(Vector2(Random(200.0) - 100.0, Random(200.0) - 100.0)))

        table.insert(sprites, sprite)
    end
end

function SubscribeToEvents()
    -- Subscribe HandleUpdate() function for processing update events
    SubscribeToEvent("Update", "HandleUpdate")
end

function MoveSprites(timeStep)
    local width = graphics.width
    local height = graphics.height

    for i = 1, numSprites do
        local sprite = sprites[i]
        sprite.rotation = sprite.rotation + timeStep * 30

        local newPos = sprite.position
        newPos = newPos + sprite:GetVar(VAR_VELOCITY):GetVector2() * timeStep

        if newPos.x >= width then
            newPos.x = newPos.x - width
        elseif newPos.x < 0 then
            newPos.x = newPos.x + width
        end
        if newPos.y >= height then
            newPos.y = newPos.y - height
        elseif newPos.y < 0 then
            newPos.y = newPos.y + height
        end
        sprite.position = newPos
    end
end

function HandleUpdate(eventType, eventData)
    local timeStep = eventData:GetFloat("TimeStep")

    MoveSprites(timeStep)
end

-- Create XML patch instructions for screen joystick layout specific to this sample app
function GetScreenJoystickPatchString()
    return
        "<patch>" ..
        "    <add sel=\"/element/element[./attribute[@name='Name' and @value='Hat0']]\">" ..
        "        <attribute name=\"Is Visible\" value=\"false\" />" ..
        "    </add>" ..
        "</patch>"
end

 

As you can see, the code is clean enough and well enough documented to learn from. Unfortunately there aren't equivalent Lua examples right now.

EDIT: Ok, my bad.  Fortunately there are in fact Lua examples as well!  They were just very well hidden in the /Bin/Data/LuaScripts folder.

 

Hello World

 

Urho3D commits a common sin, one that drives me absolutely nuts with game engines: its Hello World, in fact all of its C++ examples, are built over a “Sample” base class.  This means that when the reader wants to start from scratch on their own project, they have to tear through the base class to figure out what goes into a core application.  I get why they do this, so they can focus on the feature they want to show, but at least one example should be as complete as possible, with no underlying class to build on.  Fortunately, I have done this for you.  The following is basically the “minimum usable” Urho3D application:

 

TestMain.h

#pragma once

#include "Application.h"


using namespace Urho3D;

class TestMain : public Urho3D::Application {
   OBJECT(TestMain);

public:
   TestMain(Urho3D::Context*);

   virtual void Setup();
   virtual void Start();
   virtual void Stop() {}

private:
   void onKeyDown(StringHash,  VariantMap&);

};

 

TestMain.cpp

#include "TestMain.h"
#include "Engine.h"
#include "Graphics.h"
#include "Input.h"
#include "InputEvents.h"
#include "ResourceCache.h"
#include "UI.h"
#include "Font.h"
#include "Text.h"

using namespace Urho3D;

DEFINE_APPLICATION_MAIN(TestMain)

TestMain::TestMain(Urho3D::Context* context) : Application(context){
}

void TestMain::Setup(){
   engineParameters_["FullScreen"] = false;
}

void TestMain::Start(){
   SubscribeToEvent(E_KEYDOWN, HANDLER(TestMain,onKeyDown));

   SharedPtr<Text> text(new Text(context_));
   text->SetText("Hello Cruel World!");
   text->SetColor(Color::WHITE);
   text->SetFont(GetSubsystem<ResourceCache>()->GetResource<Font>("Fonts/BlueHighway.ttf"), 42);
   text->SetHorizontalAlignment(HA_CENTER);
   text->SetVerticalAlignment(VA_CENTER);

   GetSubsystem<UI>()->GetRoot()->AddChild(text);
}

void TestMain::onKeyDown(StringHash event, VariantMap& data){
   engine_->Exit();
}

 

It creates a windowed Hello World application, displays the text “Hello Cruel World!” in white, centered on the screen, and waits for any key to be pressed before exiting.
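
The windowed mode comes from the single engineParameters_ entry set in Setup().  engineParameters_ is just a VariantMap of startup settings, so other options can be set the same way.  A minimal sketch follows; apart from “FullScreen”, the parameter names here are my own assumptions about what the engine reads at startup, so verify them against the Engine source or documentation if one doesn’t seem to take effect:

void TestMain::Setup(){
   // engineParameters_ holds startup settings read by the engine before
   // initialization. "FullScreen" is from the sample above; the other keys
   // are assumptions -- check the engine's startup parameters if in doubt.
   engineParameters_["FullScreen"]   = false;
   engineParameters_["WindowWidth"]  = 1280;
   engineParameters_["WindowHeight"] = 720;
   engineParameters_["WindowTitle"]  = "TestMain";
   engineParameters_["VSync"]        = true;
}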

While basic, it should give you some idea of how Urho3D works.  There are times, like when trying to figure out what parameters engineParameters_ takes, that you really wish for better reference documentation, but it was fairly simple to get things up and running.  I did have a bit of a struggle with the application life cycle when I tried to put more logic into Setup() instead of Start(), but otherwise things mostly worked how I expected.   Speaking of documentation…

 

The Documentation

 

So what’s the documentation like?  It’s split into two parts: hand-written documentation covering the various aspects, tasks and systems in Urho3D, and an automatically generated class reference. You can read the documentation here.

 

As you can see, most of the major systems are covered:

image

 

The documentation is well written in clear English and, for the most part, covers what you would expect it to.  For an open source project I have to say the overall documentation level is very good.  The only area I was somewhat let down by was the reference material.

 

There is an automatically generated class reference available:

image

But the details are pretty sparse:

image

 

So, for example, if you are hunting down, say… what audio formats are supported, this information can be a bit hard to find and may result in you having to jump into the source code.  I do wish there were more implementation-specific details in the reference materials.

 

Perhaps I am nitpicking at this point…  working so much in Java lately, JavaDoc has really spoiled me.

 

In summary, the documentation is solid, great in fact for an open source project.  I would however appreciate more detail in the reference material.

 

Tools

 

Part of what makes an engine an engine is the tooling it supports, and Urho3D is no exception.  We already mentioned Urho3DPlayer, but there are several other tools, one of which actually runs in the player.  There is a full-blown 3D level editor:

image

 

The editor enables you to create and edit several node types:

image

 

And provides context appropriate property editing:

image

 

It’s not going to win any beauty pageants, but it is a fully functioning 3D world editor written entirely in AngelScript.  So if it doesn’t have functionality you want, simply add it.  The code is all available in the Bin\Data\Scripts\Editor folder:

image

 

With full code access, you should easily be able to extend the editor to fit whatever type of game you are looking at creating.

 

In addition to the editor, there are a number of other tools.  There is AssetImporter, built on the Assimp library, for importing 3D assets, and there is also a tool for importing Ogre3D assets.  There is PackageTool for pulling all of your assets together, plus a shader compiler, a lightmap generator and more.

 

Summary

 

Urho3D is an impressive, well documented, cross platform game engine with clean accessible code and a ton of examples.  There are of course some negatives too.  The tools aren’t nearly as polished as you see in many commercial engines, the reference material could be a bit more extensive and the community isn’t huge.  I can’t speak to performance as I never dove in that deeply.  Is it worth checking out for your own game project?  Well, if control and open source are important to you and you like C++, AngelScript and/or Lua, I would most certainly give it a look. 

 

What do you think, does Urho3D look interesting to you?  Would you like to see more in-depth tutorials from GameFromScratch.com?  Let me know!

Programming


11. October 2014

 

 

So at this point we’ve covered configuring Cocos2d-x, basic graphics, and mouse, touch and keyboard event handling, but wouldn’t it be nice to, you know… do something?  Most games are pretty boring if they are completely static, no?  Well, in this tutorial section we are going to make things a bit more interesting.  One of the ways we are going to add a bit of life to our game is by using Actions, which we will cover in a second.  First we need to cover something else: the game loop.

 

Handling Updates

 

Pretty much every game ever made has a game loop, even if it’s hidden by the game engine.  A Cocos2d-x game is no exception, although it might not be immediately obvious.

What's a game loop?


A game loop is essentially the heart of a game, what causes the game to actually run. The following is a fairly typical game loop:

void gameLoop(){
   while (game != DONE){
      getInput();
      physicsEngine.stepForward();
      updateWorld();
      render();
   }
   cleanup();
}

 

As you can see, it's quite literally a loop that calls the various functions that make your game a game.  This is obviously a rather primitive example but really 90% of game loops end up looking very similar to this.

 

However, once you are using a game engine, things get slightly different.  All this stuff still happens; it’s just no longer your code’s responsibility to handle it.  Instead, the game engine performs the loop and each step then calls back into your game code.  Consider when your game handles input events: where do those events come from?  Well, chances are the game engine has a getInput() somewhere inside it, and as part of that process calls your event handlers.  Even though you don’t have to handle the game’s lifecycle yourself, it’s helpful to understand what’s going on behind the scenes.

 

So far in all of our examples we either handled everything in init() or in response to input event callbacks, and that can only get you so far.  What happens when you want to update your game independently of input events?  One option is to update your game every time you render a frame of graphics, but this is generally not a great idea.  It’s very common to try to run graphics as fast as possible but update the game at a fixed frequency.  Plus, logically, does it really make sense to be updating stuff during a function that’s responsible for drawing graphics?  No, not really.

 

Fortunately there is a ready and much better named alternative… you guessed it, update.  The method update is part of the Node class and is easily overridden.  Let’s take a quick look at a game that handles update.  I got so sick of recreating scenes each time I created a new project, so the name might look somewhat familiar.

Also you are going to need a sprite for this sample.  Personally I am using a picture of my car… yeah, that’s it.

 

 

Veyron

 

Feel free to use whatever you want.  Now the code:

 

HelloWorld.h

#pragma once

#include "cocos2d.h"

class HelloWorld : public cocos2d::Layer
{
public:
    static cocos2d::Scene* createScene();
    virtual bool init() override;
    CREATE_FUNC(HelloWorld);

    void update(float) override;

private:
   cocos2d::Sprite* sprite;
};

 

HelloWorld.cpp

#include "HelloWorldScene.h"

USING_NS_CC;

Scene* HelloWorld::createScene()
{
    auto scene = Scene::create();
    auto layer = HelloWorld::create();
    scene->addChild(layer);
    return scene;
}

bool HelloWorld::init()
{
    if ( !Layer::init() )
    {
        return false;
    }
    
    sprite = Sprite::create("Veyron.png");
    sprite->setPosition(this->getBoundingBox().getMidX(), this->getBoundingBox().getMidY());
    this->addChild(sprite, 0);
    
    this->scheduleUpdate();
    return true;
}

void HelloWorld::update(float delta){
   auto position = sprite->getPosition();
   position.x -= 250 * delta;
   if (position.x  < 0 - (sprite->getBoundingBox().size.width / 2))
      position.x = this->getBoundingBox().getMaxX() + sprite->getBoundingBox().size.width/2;
   sprite->setPosition(position);
}

 

Now, if you run the code you get:

 

action1

 

So what’s going on here?  Well, think back to that game loop example I gave earlier.  Now imagine that somewhere deep inside Cocos2d-x, when it performs the “updateWorld” portion, it loops through all the Nodes in the game and calls their update() method.  Well, that’s basically exactly what happens.  The line:

 

     this->scheduleUpdate();

 

is what tells Cocos2d-x to call the Node's update function.  We then override update to implement our logic.  The sole parameter passed to update is a float value representing the amount of time, in seconds, since the last time the update function was called.  Therefore, if it’s been 1/10 of a second since the last time update was called, the value passed in will be 0.1.

 

Inside the update itself, we simply change the position of our sprite until it is fully off screen on the left hand side, at which point we move it to the right hand side and repeat the process.  The only code that is of interest here is this line:

position.x -= 250 * delta;

 

This is a pretty common technique in game dev for creating smooth animations.  What we are saying here is that we want to move by 250 pixels to the left per second.  The problem is, we have no idea how often our update is going to be called, so on a faster computer the car would move faster and on a slower computer it would move slower.  This is obviously not ideal.  Enter the delta value.  Since we know how long it has been since the last frame, if we multiply our move amount by the fraction of a second each frame takes, it will run at roughly the same speed on all computers.  So, using the 0.10 value above, this means we are running 10 updates per second, so each update we will move by 250 * 0.10, or 25 pixels, literally a 10th of the per-second amount.  If however this value is ever over one second, things will get screwy.  That said, if your game is running at less than 1 FPS, you’ve got bigger problems to worry about!  So, in a nutshell, when moving on a frame by frame basis, express your units in seconds, then multiply them by the delta passed in to the update function.
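
The “things will get screwy” case is easy to guard against.  This is my own addition rather than part of the sample, but one simple option is to cap the delta before using it, so a long stall (a debugger break, dragging the window, etc.) can’t move the sprite an absurd distance in a single frame:

void HelloWorld::update(float delta){
   // Cap delta so one very long frame can't teleport the sprite.
   // The 1/15 second cap is an arbitrary choice for this sketch.
   if (delta > 1.0f / 15.0f)
      delta = 1.0f / 15.0f;

   auto position = sprite->getPosition();
   position.x -= 250 * delta;   // 250 pixels per second, frame rate independent
   if (position.x < 0 - (sprite->getBoundingBox().size.width / 2))
      position.x = this->getBoundingBox().getMaxX() + sprite->getBoundingBox().size.width / 2;
   sprite->setPosition(position);
}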

 

Now, remember earlier when I said it’s possible to run your updates in the render method, but that it isn’t ideal?  How then do we control the frequency at which our update is called?

 

Well, we can’t really, as you never know how fast the computer or phone you are going to be running on is.  You do, however, have control over the priority the scheduler gives your update function.  By default, when you call scheduleUpdate(), your update function will be called every single frame.  If the node you are updating doesn’t actually need to be updated every frame, you are just wasting CPU power (and battery life).  If you have a lower priority update, you can tell Cocos2d-x this using:

this->scheduleUpdateWithPriority(42);

 

The actual value passed in is simply relative to other priorities.  When Cocos2d-x is trying to decide which updates to call, it will first call all of the update() functions that don’t have a priority set.  Then it will call them in ascending order of priority value: the lowest value first, then the next, and so on.  So if you have three Nodes with update set, one with no priority, one with a priority of 42 and one with a priority of 13, the no-priority update will be called first, then the 13 and finally the 42.  In some ways you aren’t actually setting the priority, you are setting the lack of priority! 
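
To make that ordering concrete, here is a small sketch.  The three nodes are placeholders invented for this example (in practice they would be your own Node subclasses overriding update()):

auto nodeA = cocos2d::Node::create();
auto nodeB = cocos2d::Node::create();
auto nodeC = cocos2d::Node::create();

// Assuming the ordering described above, each frame the scheduler calls:
nodeA->scheduleUpdate();                  // no priority set -- called first
nodeB->scheduleUpdateWithPriority(13);    // called next
nodeC->scheduleUpdateWithPriority(42);    // called last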

 

In place of overriding update(), you can also use schedule and scheduleOnce to schedule any function to be called, either repeatedly at a given interval or once after a period of time.  The called function needs to have the same signature as update, that is, it takes a single float parameter and returns void.
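
Here is a quick sketch of both, called from init().  The spawnEnemy and fireOnce names are placeholders invented for this example; each would be declared on the class as void name(float delta):

// Call HelloWorld::spawnEnemy(float) every 2 seconds, forever.
this->schedule(schedule_selector(HelloWorld::spawnEnemy), 2.0f);

// Call HelloWorld::fireOnce(float) a single time, 5 seconds from now.
this->scheduleOnce(schedule_selector(HelloWorld::fireOnce), 5.0f);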

 

Sometimes, however, instead of reacting each frame and updating your world, you just want to “fire and forget” something.  For example, let’s say you want to move an object to a certain location over a certain period of time.  This is where Actions come in.

 

Using Actions

 

As just mentioned, Actions allow you to set something in motion and forget about it.  Actions are remarkably consistent in how they work, so I will only show small snippets of code for each one.  We are using the following code as our base:

 

#include "HelloWorldScene.h"

cocos2d::Scene* HelloWorld::createScene()
{
    auto scene = cocos2d::Scene::create();
    auto layer = HelloWorld::create();
    scene->addChild(layer);
    return scene;
}

bool HelloWorld::init()
{
    if ( !Layer::init() )
    {
        return false;
    }
    
    sprite = cocos2d::Sprite::create("Veyron.png");
    sprite->setPosition(this->getBoundingBox().getMidX(), this->getBoundingBox().getMidY());
    this->addChild(sprite, 0);
    
    auto listener = cocos2d::EventListenerKeyboard::create();
    listener->onKeyPressed = [=](cocos2d:: EventKeyboard::KeyCode code, cocos2d::Event * event)->void{
      // This is where our different actions are going to be implemented
      auto action = cocos2d::MoveTo::create(2, cocos2d::Vec2(0, 0));
      sprite->runAction(action);
   };

    this->_eventDispatcher->addEventListenerWithSceneGraphPriority(listener,this);
        return true;
}

 

This is also our first example of using an Action. In this case we are using the MoveTo action to move the target node to the position (0,0) over a duration of 2 seconds. You run the action on a Node using the runAction method.  Run it and press any key and you will see:

 

MoveTo

 

There are several similar actions; let’s take a look at a couple of them now.  Instead of MoveTo, there is also MoveBy, which enables you to move your node relative to its current position, like so:

 

   auto listener = cocos2d::EventListenerKeyboard::create();
   listener->onKeyPressed = [=](cocos2d:: EventKeyboard::KeyCode code, cocos2d::Event * event)->void{
      auto action = cocos2d::MoveBy::create(2, cocos2d::Vec2(300, 300));
      sprite->runAction(action);
   };

 

When you run this, instead of moving to a destination over a period of 2 seconds, we instead move by 300 right and 300 up over the same time period.

 

MoveBy

 

There are several similar Actions that can be used to transform and modify a Node such as RotateBy, RotateTo, ScaleTo, SkewTo, TintTo, TintBy and more.
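
They all follow the same create-then-runAction pattern shown above; for example (the durations and amounts here are arbitrary):

// Rotate by 180 degrees, relative to the current rotation, over 2 seconds.
sprite->runAction(cocos2d::RotateBy::create(2.0f, 180.0f));

// Scale to an absolute factor of 2x over 1 second.
sprite->runAction(cocos2d::ScaleTo::create(1.0f, 2.0f));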

 

In addition to transforming nodes, you can also loop and sequence actions to make combos.  Let’s take a look at an example of a sequence of several actions.  In this example we are going to perform a ScaleBy, a TintTo and then a FadeTo, back to back, using the Sequence action.

 

   listener->onKeyPressed = [=](cocos2d:: EventKeyboard::KeyCode code, cocos2d::Event * event)->void{
      cocos2d::Vector<cocos2d::FiniteTimeAction*> actions;
      actions.pushBack(cocos2d::ScaleBy::create(1.5, 1.5));
      actions.pushBack(cocos2d::TintTo::create(1.5, 255, 0, 0));
      actions.pushBack(cocos2d::FadeTo::create(1.5, 30));
      
      auto sequence = cocos2d::Sequence::create(actions);

      sprite->runAction(sequence);
   };

 

And when run:

Sequence

 

There are two things to be aware of from this example. First, you may notice that TintTo takes three GLubyte values to represent the red, green and blue values of the colour, while FadeTo takes a single GLubyte value to represent the alpha, or transparency.  A GLubyte is an 8-bit value that ranges from 0 to 255.  In all cases 255 is the fully on value and 0 is the fully off value.  Therefore the value (255,0,0) is 100% red, 0% green, 0% blue, while the value 30 in FadeTo is 30/255, or about 11.7%, opaque.  The second important thing to note is the use of Vector.  This is a cocos2d type, NOT a std::vector, although behind the scenes I believe it is still implemented using a std::vector.  This means you can’t use it as a std::vector, nor can you use a std::vector where a cocos2d::Vector is expected.  It also, unfortunately, means you can’t use initializer lists.

 

So, that’s how you perform a number of actions in sequence; what happens if you want to perform them all at once?  You can do that too using Spawn, which personally I think could really have a better name!  Let’s look at exactly the same example using Spawn instead.  The only difference is I increased the duration of each action to 4 seconds, mostly just to make it easier to screen capture. :)

 

   listener->onKeyPressed = [=](cocos2d:: EventKeyboard::KeyCode code, cocos2d::Event * event)->void{
      cocos2d::Vector<cocos2d::FiniteTimeAction*> actions;
      actions.pushBack(cocos2d::ScaleBy::create(4, 1.5));
      actions.pushBack(cocos2d::TintTo::create(4, 255, 0, 0));
      actions.pushBack(cocos2d::FadeTo::create(4, 30));
      
      auto parallel = cocos2d::Spawn::create(actions);

      sprite->runAction(parallel);
   };

 

And run it:

parallel

 

You also have the ability to repeat actions, either a certain number of times or simply forever.  That is exactly what this example is going to do.  The first action moves to the right by 10 pixels every 0.2 seconds.  The second action scales the sprite up 30% every 2 seconds.  The first action will be repeated 10 times, the second forever, or until it crashes your computer, that is. :)

 

   auto listener = cocos2d::EventListenerKeyboard::create();
   listener->onKeyPressed = [=](cocos2d:: EventKeyboard::KeyCode code, cocos2d::Event * event)->void{
      auto action = cocos2d::MoveBy::create(0.2, cocos2d::Vec2(10, 0));
      auto action2 = cocos2d::ScaleBy::create(2, 1.3);
      auto repeat = cocos2d::Repeat::create(action, 10);
      auto repeatForever = cocos2d::RepeatForever::create(action2);

      sprite->runAction(repeat);
      sprite->runAction(repeatForever);
   };

 

Running:

repeat

 

So far we’ve only looked at Actions inherited from ActionInterval, which are actions that happen over time.  There are also actions that happen instantly; let’s take a look at some of them now.  These actions inherit from ActionInstant.  In this example we illustrate several instant actions (as well as a MoveTo, DelayTime and Sequence, as a bunch of instant actions doesn’t make for a great demonstration!).

 

   auto listener = cocos2d::EventListenerKeyboard::create();
   listener->onKeyPressed = [=](cocos2d:: EventKeyboard::KeyCode code, cocos2d::Event * event)->void{
      cocos2d::Vector<cocos2d::FiniteTimeAction*> actions;
      actions.pushBack(cocos2d::MoveTo::create(1, cocos2d::Vec2(0, 0)));
      actions.pushBack(cocos2d::DelayTime::create(1));
      actions.pushBack(cocos2d::Place::create(cocos2d::Vec2(
         this->getBoundingBox().getMidX(), this->getBoundingBox().getMidY())));
      actions.pushBack(cocos2d::DelayTime::create(1));
      actions.pushBack(cocos2d::FlipX::create(true));
      actions.pushBack(cocos2d::DelayTime::create(1));
      actions.pushBack(cocos2d::FlipY::create(true));
      actions.pushBack(cocos2d::DelayTime::create(1));
      actions.pushBack(cocos2d::Hide::create());
      actions.pushBack(cocos2d::DelayTime::create(1));
      actions.pushBack(cocos2d::Show::create());
      actions.pushBack(cocos2d::DelayTime::create(1));

      actions.pushBack(cocos2d::CallFunc::create([=]()->void{
         this->setColor(cocos2d::Color3B::RED);
      }));

      actions.pushBack(cocos2d::DelayTime::create(1));
      actions.pushBack(cocos2d::RemoveSelf::create(false));

      auto sequence = cocos2d::Sequence::create(actions);
      sprite->runAction(sequence);
   };

 

This code running:

Instant

 

As you can see, instant actions work almost identically.  FlipX mirrors the Node along the X axis, while FlipY does the same across the Y axis.  DelayTime, which we haven’t used yet, does exactly what its name says: it delays for the given number of seconds before executing the next Action.  The Place action can be thought of as a zero-duration MoveTo, putting the Node at the specified position.

 

CallFunc and RemoveSelf are the two actions that probably require the most explanation.  CallFunc enables you to call code from an action; in this case I use a lambda that simply changes the background color of the Layer.  CallFunc is an incredibly important action and allows you to do just about anything using Actions, such as updating state, playing a sound, etc.  RemoveSelf is another handy action… it’s basically a kill switch.  When a RemoveSelf action is encountered, that Node is removed from its parent.  Passing true causes cleanup to be done as well.  This is incredibly handy for something like handling the lifespan of a bullet in the scene, for example, as sketched below.
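
As a rough sketch of that bullet idea, and this is my own example rather than something from the tutorial code, you can sequence a movement with a RemoveSelf so the node cleans itself up once it has flown off screen:

// Fire-and-forget bullet: move off to the right over 2 seconds,
// then remove the node from its parent (true = also run cleanup).
auto bullet = cocos2d::Sprite::create("Veyron.png");
this->addChild(bullet);

cocos2d::Vector<cocos2d::FiniteTimeAction*> bulletActions;
bulletActions.pushBack(cocos2d::MoveBy::create(2.0f, cocos2d::Vec2(2000, 0)));
bulletActions.pushBack(cocos2d::RemoveSelf::create(true));
bullet->runAction(cocos2d::Sequence::create(bulletActions));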

 

Setting a Layer's Background Color


You may have noticed I changed the background of the scene in the previous example using a call to setColor(). However, if you try to run this code as is, you will notice it doesn't actually work. This is because, behind the scenes, I made a couple of small changes. Instead of our scene inheriting from Layer, we instead inherit from LayerColor, which adds, you guessed it, color information. Additionally, instead of calling Layer::init() in our own init, we call LayerColor::initWithColor(). With these two changes you can now set the background color of the layer.

 


Odds and Ends

 

There are a few interesting topics that fit into this chapter but that we didn’t cover yet, so I am going to shoehorn them in here at the end.  One very common activity developers want to perform when working with Actions is pausing them.  Since you can have several Actions running at once, what do you do when you want to pause your game?  Thankfully it’s quite simple to accomplish using the ActionManager.

 

HelloWorldScene.h

#pragma once

#include "cocos2d.h"

class HelloWorld : public cocos2d::LayerColor
{
public:
    static cocos2d::Scene* createScene();
    virtual bool init() override;
    CREATE_FUNC(HelloWorld);

private:
   cocos2d::Sprite* sprite,*sprite2;
   cocos2d::Label* label;
   bool spritePaused = false;
   cocos2d::Vector<Node*> pausedNodes;
};

 

HelloWorldScene.cpp

#include "HelloWorldScene.h"

cocos2d::Scene* HelloWorld::createScene()
{
    auto scene = cocos2d::Scene::create();
    auto layer = HelloWorld::create();
    scene->addChild(layer);
    return scene;
}

bool HelloWorld::init()
{
   if (!LayerColor::initWithColor(cocos2d::Color4B::BLACK))
    {
        return false;
    }
    
   label = cocos2d::Label::createWithSystemFont("Press space to pause all, 1 to pause left", "Arial", 30);
   label->setPosition(cocos2d::Vec2(this->getBoundingBox().getMidX(), this->getBoundingBox().getMaxY() - 20));

   sprite = cocos2d::Sprite::create("Veyron.png");
   sprite2 = cocos2d::Sprite::create("Veyron.png");
   sprite->setPosition(250, this->getBoundingBox().getMidY());
   sprite2->setPosition(700, this->getBoundingBox().getMidY());

   auto rotate = cocos2d::RotateBy::create(1, 45);
   auto rotate2 = cocos2d::RotateBy::create(1, -45);

   auto repeat1 = cocos2d::RepeatForever::create(rotate);
   auto repeat2 = cocos2d::RepeatForever::create(rotate2);

   this->addChild(label,0);
   this->addChild(sprite, 0);
   this->addChild(sprite2, 0);
    
   sprite->runAction(repeat1);
   sprite2->runAction(repeat2);
   auto listener = cocos2d::EventListenerKeyboard::create();
   listener->onKeyPressed = [=](cocos2d::EventKeyboard::KeyCode code, cocos2d::Event * event)->void{
      // On Spacebar, Pause/Unpause all actions and updates
      if (code == cocos2d::EventKeyboard::KeyCode::KEY_SPACE){
         if (pausedNodes.size()){
            cocos2d::Director::getInstance()->getActionManager()->resumeTargets(pausedNodes);
            pausedNodes.clear();
            spritePaused = false; // In case user currently has 1 pressed too
         }
         else
            pausedNodes = cocos2d::Director::getInstance()->getActionManager()->pauseAllRunningActions();
         label->setString("Spacebar pressed");
      }
      // Pause/UnPause just sprite 1
      if (code == cocos2d::EventKeyboard::KeyCode::KEY_1){
         if (spritePaused)
            sprite->resumeSchedulerAndActions();
         else
            sprite->pauseSchedulerAndActions();
         spritePaused = !spritePaused;
         label->setString("1 pressed");
      }
      
   };

   this->_eventDispatcher->addEventListenerWithSceneGraphPriority(listener,this);
   return true;
}

 

And run it:

ActionManager

 

As you can see, using ActionManager you are able to pause execution of Actions, either for a single Node or for all Nodes at once.  In the case of a single Node it’s simply a matter of calling pauseSchedulerAndActions and resumeSchedulerAndActions.  You can also call pause(), which will additionally stop the Node from receiving events.

 

When pausing all running actions by calling getActionManager()->pauseAllRunningActions(), the call returns a cocos2d::Vector of all the Nodes that were paused.  When resuming, you simply pass this Vector back in a call to resumeTargets().

 

Earlier on we called scheduleUpdate(), which resulted in our update method being called every frame.  However, you can also schedule any kind of function using the scheduler.  Let’s take a look:

 

#include "HelloWorldScene.h"

cocos2d::Scene* HelloWorld::createScene()
{
    auto scene = cocos2d::Scene::create();
    auto layer = HelloWorld::create();
    scene->addChild(layer);
    return scene;
}


void HelloWorld::callOnce(float delta){
   cocos2d::MessageBox("Called after 10 seconds elapsed", "Message");
}

bool HelloWorld::init()
{
   if (!LayerColor::initWithColor(cocos2d::Color4B::BLACK))
    {
        return false;
    }
   
   this->scheduleOnce(schedule_selector(HelloWorld::callOnce), 10);
    return true;
}

 

This code will wait 10 seconds and then call our method callOnce().
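
scheduleOnce() fires a single time.  If you instead want a callback invoked repeatedly on a fixed interval, schedule() works the same way.  Here is a minimal sketch, assuming a hypothetical tick() method that you have also declared in HelloWorldScene.h:

// In init(), request that tick() be called every 2 seconds
this->schedule(schedule_selector(HelloWorld::tick), 2.0f);

// The callback itself
void HelloWorld::tick(float delta){
   cocos2d::log("tick, %f seconds since the last call", delta);
   // Stop the repeating call whenever you are done with it
   // this->unschedule(schedule_selector(HelloWorld::tick));
}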

 

So, even though the event loop is hidden away in Cocos2d-x, there are plenty of ways you can control the action, be it using updates, scheduling functions to run, or using Actions.

 

Programming


7. October 2014

 

 

In this part of the Cocos2d-x tutorial series we are going to take a look at what's involved in handling keyboard events.  If you went through the mouse/touch tutorial, a lot of this is going to seem very familiar, as the process is quite similar.  That said, keyboard handling does have its own special set of problems to deal with.

Let's jump straight in to an example. Once again I assume you already know how to create your own AppDelegate; if not, I suggest you jump back to this part first.

 

Handling Keyboard Events

 

Our first example is simply going to respond to WASD and Arrow keys to move the Cocos2d-x logo around the screen.  In this example I made no special modifications to a standard scene, so the header is unchanged from previous tutorials.

 

KeyboardScene.cpp

#include "KeyboardScene.h"

USING_NS_CC;

Scene* KeyboardScene::createScene()
{
    auto scene = Scene::create();
    
    auto layer = KeyboardScene::create();
    scene->addChild(layer);
    return scene;
}

bool KeyboardScene::init()
{
    if ( !Layer::init() )
    {
        return false;
    }
    
    auto sprite = Sprite::create("HelloWorld.png");
    sprite->setPosition(this->getContentSize().width/2, this->getContentSize().height/2);

    this->addChild(sprite, 0);

    auto eventListener = EventListenerKeyboard::create();



    eventListener->onKeyPressed = [](EventKeyboard::KeyCode keyCode, Event* event){

        Vec2 loc = event->getCurrentTarget()->getPosition();
        switch(keyCode){
            case EventKeyboard::KeyCode::KEY_LEFT_ARROW:
            case EventKeyboard::KeyCode::KEY_A:
                event->getCurrentTarget()->setPosition(--loc.x,loc.y);
                break;
            case EventKeyboard::KeyCode::KEY_RIGHT_ARROW:
            case EventKeyboard::KeyCode::KEY_D:
                event->getCurrentTarget()->setPosition(++loc.x,loc.y);
                break;
            case EventKeyboard::KeyCode::KEY_UP_ARROW:
            case EventKeyboard::KeyCode::KEY_W:
                event->getCurrentTarget()->setPosition(loc.x,++loc.y);
                break;
            case EventKeyboard::KeyCode::KEY_DOWN_ARROW:
            case EventKeyboard::KeyCode::KEY_S:
                event->getCurrentTarget()->setPosition(loc.x,--loc.y);
                break;
        }
    };

    this->_eventDispatcher->addEventListenerWithSceneGraphPriority(eventListener,sprite);

    return true;
}

 

When run, you see the logo centered and can move it around using either WASD or arrow keys.

KeyboardSS

The code works almost identically to our earlier Touch examples.  You create an EventListener, in this case an EventListenerKeyboard, and implement the onKeyPressed event handler.  The first parameter passed in is the EventKeyboard::KeyCode enum, a value representing the key that was pressed.  The second is the Event, whose target in this case is our sprite.  We use the Event pointer to get the target Node and update its position in a direction depending on which key is pressed.  Finally we wire up our scene's _eventDispatcher to receive events.  Nothing really unexpected here.
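
EventListenerKeyboard also exposes an onKeyReleased member that works exactly the same way.  A minimal sketch, attached to the same eventListener created above:

    eventListener->onKeyReleased = [](EventKeyboard::KeyCode keyCode, Event* event){
        // Fired once when the key is let go; handy for stopping movement
        cocos2d::log("Key with keycode %d released", static_cast<int>(keyCode));
    };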

 

Polling the Keyboard

 

You may however ask yourself… what if I want to poll the keyboard?  For example, what if you wanted to check whether the spacebar is pressed at any given time?

 

Short answer is, you can’t.  Cocos2d-x is entirely event driven.

 

Long answer however is, it’s relatively easy to roll your own solution, so let’s do that now.  I’ll jump right in with the code and discuss it after.

 

KeyboardScene.h

#pragma once

#include "cocos2d.h"
#include <map>


class KeyboardScene : public cocos2d::Layer
{
public:

    static cocos2d::Scene* createScene();
    virtual bool init();

    bool isKeyPressed(cocos2d::EventKeyboard::KeyCode);
    double keyPressedDuration(cocos2d::EventKeyboard::KeyCode);

    CREATE_FUNC(KeyboardScene);

private:
    static std::map<cocos2d::EventKeyboard::KeyCode,
        std::chrono::high_resolution_clock::time_point> keys;
    cocos2d::Label * label;
public:
    virtual void update(float delta) override;
};

 

KeyboardScene.cpp

#include "KeyboardScene.h"

USING_NS_CC;

Scene* KeyboardScene::createScene()
{
    auto scene = Scene::create();
    
    KeyboardScene* layer = KeyboardScene::create();
    scene->addChild(layer);
    return scene;
}

bool KeyboardScene::init()
{
    if ( !Layer::init() )
    {
        return false;
    }

    label = cocos2d::Label::createWithSystemFont("Press the CTRL Key","Arial",32);
    label->setPosition(this->getBoundingBox().getMidX(),this->getBoundingBox().getMidY());
    addChild(label);
    auto eventListener = EventListenerKeyboard::create();



    Director::getInstance()->getOpenGLView()->setIMEKeyboardState(true);
    eventListener->onKeyPressed = [=](EventKeyboard::KeyCode keyCode, Event* event){
        // If the key already exists, do nothing as it will already have a time stamp
        // Otherwise, set the timestamp to now
        if(keys.find(keyCode) == keys.end()){
            keys[keyCode] = std::chrono::high_resolution_clock::now();
        }
    };
    eventListener->onKeyReleased = [=](EventKeyboard::KeyCode keyCode, Event* event){
        // remove the key.  std::map::erase() doesn't care if the key doesn't exist
        keys.erase(keyCode);
    };

    this->_eventDispatcher->addEventListenerWithSceneGraphPriority(eventListener,this);

    // Let cocos know we have an update function to be called.
    // No worries, I'll cover this in more detail later on
    this->scheduleUpdate();
    return true;
}

bool KeyboardScene::isKeyPressed(EventKeyboard::KeyCode code) {
    // Check if the key is currently pressed by seeing if it's in the std::map keys
    // In retrospect, keys is a terrible name for a key/value paired datatype, isn't it?
    if(keys.find(code) != keys.end())
        return true;
    return false;
}

double KeyboardScene::keyPressedDuration(EventKeyboard::KeyCode code) {
    if(!isKeyPressed(code))
        return 0;  // Not pressed, so no duration obviously

    // Return the amount of time that has elapsed between now and when the user
    // first started holding down the key in milliseconds
    // Obviously the start time is the value we hold in our std::map keys
    return std::chrono::duration_cast<std::chrono::milliseconds>
            (std::chrono::high_resolution_clock::now() - keys[code]).count();
}

void KeyboardScene::update(float delta) {
    // Register an update function that checks to see if the CTRL key is pressed
    // and if it is displays how long, otherwise tell the user to press it
    Node::update(delta);
    if(isKeyPressed(EventKeyboard::KeyCode::KEY_CTRL)) {
        std::stringstream ss;
        ss << "Control key has been pressed for " << 
            keyPressedDuration(EventKeyboard::KeyCode::KEY_CTRL) << " ms";
        label->setString(ss.str().c_str());
    }
    else
        label->setString("Press the CTRL Key");
}
// Because cocos2d-x requires createScene to be static, we need to make other non-pointer members static
std::map<cocos2d::EventKeyboard::KeyCode,
        std::chrono::high_resolution_clock::time_point> KeyboardScene::keys;

 

And when you run it:

ControlKey

 

So, what are we doing here?  Well essentially we record key events as they come in.  We have two events to work with, onKeyPressed and onKeyReleased.  When a key is pressed, we store it in a std::map, using the KeyCode as the key and the current time as the value.  When the key is released, we remove the released key from the map.  Therefore at any given time, we know which keys are pressed and for how long.  In this particular example, in the update() function ( ignore that for now, I’ll get into it later! ) we poll to see if the Control key is pressed.  If it is, we find out for how long and display a string.

 

So, even though polling isn’t built in to Cocos2d-x, it is relatively easy to add.
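
As a quick illustration of why polling is handy, here is a sketch of how the same update() function could drive continuous movement while a key is held down.  The playerSprite member is hypothetical and assumed to be created in init():

void KeyboardScene::update(float delta) {
    Node::update(delta);
    // Move 200 units per second for as long as the key stays down;
    // no key-repeat events are involved
    if(isKeyPressed(EventKeyboard::KeyCode::KEY_RIGHT_ARROW))
        playerSprite->setPositionX(playerSprite->getPositionX() + 200 * delta);
    if(isKeyPressed(EventKeyboard::KeyCode::KEY_LEFT_ARROW))
        playerSprite->setPositionX(playerSprite->getPositionX() - 200 * delta);
}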

 

Dealing with Keyboards on Mobile Devices

 

So, what about keyboards on mobile devices?  All Android phones and iOS devices are able to display a soft keyboard ( the onscreen keyboard ); can we use it?  The answer is… sort of.

 

What about physical keyboards on mobile devices?


You may be wondering, how does a physical keyboard on a mobile device work with Cocos2d-x? In the case of an iPad, the answer is: it doesn't. When I hooked up a Bluetooth keyboard, absolutely nothing happened. The same occurred when I paired the keyboard to my Android phone. I do not have an Android device with a built-in physical keyboard, such as the Asus Transformer, but my gut says it wouldn't work either. At least, not without you doing a lot of legwork, that is.

 

"Sort of" isn't really a great answer, so I will go into a bit more detail.  Yes, you can use the soft keyboard, but in a very limited manner: basically you can use it for text entry only.  Truth is though, this should be enough, as controlling a game using a soft keyboard would be a horrid experience.

 

Let's take a look at an example using TextFieldTTF and implementing a TextFieldDelegate:

 

KeyTabletScene.h

#pragma once
#include "cocos2d.h"

class KeyTabletScene : public cocos2d::Layer, public cocos2d::TextFieldDelegate
{
public:
    virtual ~KeyTabletScene();

    virtual bool onTextFieldAttachWithIME(cocos2d::TextFieldTTF *sender) override;

    virtual bool onTextFieldDetachWithIME(cocos2d::TextFieldTTF *sender) override;

    virtual bool onTextFieldInsertText(cocos2d::TextFieldTTF *sender, const char *text, size_t nLen) override;

    virtual bool onTextFieldDeleteBackward(cocos2d::TextFieldTTF *sender, const char *delText, size_t nLen) override;

    virtual bool onVisit(cocos2d::TextFieldTTF *sender, cocos2d::Renderer *renderer, cocos2d::Mat4 const &transform, uint32_t flags) override;

    static cocos2d::Scene* createScene();
    virtual bool init();

    CREATE_FUNC(KeyTabletScene);
};

 

KeyTabletScene.cpp

#include "KeyTabletScene.h"

USING_NS_CC;

Scene* KeyTabletScene::createScene()
{
    auto scene = Scene::create();
    
    auto layer = KeyTabletScene::create();
    scene->addChild(layer);

    return scene;
}

bool KeyTabletScene::init()
{
    if ( !Layer::init() )
    {
        return false;
    }


    // Create a text field
    TextFieldTTF* textField = cocos2d::TextFieldTTF::textFieldWithPlaceHolder("Click here to type",
            cocos2d::Size(400,200),TextHAlignment::LEFT , "Arial", 42.0);
    textField->setPosition(this->getBoundingBox().getMidX(),
            this->getBoundingBox().getMaxY() - 20);
    textField->setColorSpaceHolder(Color3B::GREEN);
    textField->setDelegate(this);

    this->addChild(textField);

    // Add a touch handler to our textfield that will show a keyboard when touched
    auto touchListener = EventListenerTouchOneByOne::create();

    touchListener->onTouchBegan = [](cocos2d::Touch* touch, cocos2d::Event * event) -> bool {
        // Show the on screen keyboard. dynamic_cast on a pointer returns nullptr
        // on failure (it only throws std::bad_cast for references), so check the result
        auto textField = dynamic_cast<TextFieldTTF *>(event->getCurrentTarget());
        if (textField != nullptr)
            textField->attachWithIME();
        return true;
    };

    this->_eventDispatcher->addEventListenerWithSceneGraphPriority(touchListener, textField);

    return true;
}

KeyTabletScene::~KeyTabletScene() {

}

bool KeyTabletScene::onTextFieldAttachWithIME(TextFieldTTF *sender) {
    return TextFieldDelegate::onTextFieldAttachWithIME(sender);
}

bool KeyTabletScene::onTextFieldDetachWithIME(TextFieldTTF *sender) {
    return TextFieldDelegate::onTextFieldDetachWithIME(sender);
}

bool KeyTabletScene::onTextFieldInsertText(TextFieldTTF *sender, const char *text, size_t nLen) {
    return TextFieldDelegate::onTextFieldInsertText(sender, text, nLen);
}

bool KeyTabletScene::onTextFieldDeleteBackward(TextFieldTTF *sender, const char *delText, size_t nLen) {
    return TextFieldDelegate::onTextFieldDeleteBackward(sender, delText, nLen);
}

bool KeyTabletScene::onVisit(TextFieldTTF *sender, Renderer *renderer, const Mat4 &transform, uint32_t flags) {
    return TextFieldDelegate::onVisit(sender, renderer, transform, flags);
}

 

And when you run it:

TabletKeyboardShot

 

Essentially, when the user touches the screen we display the onscreen keyboard with a call to attachWithIME(); the rest is handled by the text field.
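
The keyboard can be dismissed the same way.  A minimal sketch, assuming you keep the textField pointer around ( for example as a member variable ):

    // Hide the onscreen keyboard again, e.g. when the user taps somewhere else
    textField->detachWithIME();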

 

I have a sneaking feeling this method is going to be deprecated at some point in the future, being replaced by the cocos2d::ui classes, but for now it works just fine.  For the record, it is actually possible to force up the onscreen keyboard by calling Director::getInstance()->getOpenGLView()->setIMEKeyboardState(true), but it seemingly pushes your scene to the background, so it isn't a viable option for controlling a game.  I was going to look into a workaround but then thought, really… this is a downright stupid thing to do.  Doing anything other than text entry with a soft keyboard is just a bad idea.

 

 

Programming


3. October 2014

 

 

In this part of the Cocos2d-x tutorial series we are going to look at how to handle touch and mouse events.  First, you should be aware that by default Cocos2d-x treats a mouse left click as a touch, so if you only have simple input requirements and don't require multi-touch support ( which is remarkably difficult to perform with a single mouse! ), you can simply implement just the touch handlers.  This part is going to be code heavy, as we actually have 3 different tasks to cover here ( touch, multi-touch and mouse ), although all are very similar in overall behavior.

 

Let’s jump in with an ultra simple example.  Once again, I assume you’ve done the earlier tutorial parts and already have an AppDelegate.

 

Handle Touch/Click Events

 

TouchScene.h

#pragma once

#include "cocos2d.h"

class TouchScene : public cocos2d::Layer
{
public:
    static cocos2d::Scene* createScene();
    virtual bool init();  

    virtual bool onTouchBegan(cocos2d::Touch*, cocos2d::Event*);
    virtual void onTouchEnded(cocos2d::Touch*, cocos2d::Event*);
    virtual void onTouchMoved(cocos2d::Touch*, cocos2d::Event*);
    virtual void onTouchCancelled(cocos2d::Touch*, cocos2d::Event*);
    CREATE_FUNC(TouchScene);

private:
   cocos2d::Label* labelTouchInfo;
};

TouchScene.cpp

 

#include "TouchScene.h"

USING_NS_CC;

Scene* TouchScene::createScene()
{
    auto scene = Scene::create();
    auto layer = TouchScene::create();
    scene->addChild(layer);

   return scene;
}

bool TouchScene::init()
{
    if ( !Layer::init() )
    {
        return false;
    }
    
   labelTouchInfo = Label::createWithSystemFont("Touch or click somewhere to begin", "Arial", 30);

   labelTouchInfo->setPosition(Vec2(
      Director::getInstance()->getVisibleSize().width / 2,
      Director::getInstance()->getVisibleSize().height / 2));

   auto touchListener = EventListenerTouchOneByOne::create();

   touchListener->onTouchBegan = CC_CALLBACK_2(TouchScene::onTouchBegan, this);
   touchListener->onTouchEnded = CC_CALLBACK_2(TouchScene::onTouchEnded, this);
   touchListener->onTouchMoved = CC_CALLBACK_2(TouchScene::onTouchMoved, this);
   touchListener->onTouchCancelled = CC_CALLBACK_2(TouchScene::onTouchCancelled, this);

   _eventDispatcher->addEventListenerWithSceneGraphPriority(touchListener, this);
    
   this->addChild(labelTouchInfo);
   return true;
}

bool TouchScene::onTouchBegan(Touch* touch, Event* event)
{
   labelTouchInfo->setPosition(touch->getLocation());
   labelTouchInfo->setString("You Touched Here");
   return true;
}

void TouchScene::onTouchEnded(Touch* touch, Event* event)
{
   cocos2d::log("touch ended");
}

void TouchScene::onTouchMoved(Touch* touch, Event* event)
{
   cocos2d::log("touch moved");
}

void TouchScene::onTouchCancelled(Touch* touch, Event* event)
{
   cocos2d::log("touch cancelled");
}

 

Then if you run it, when you perform a touch or click:

image

 

As you can see, where you touch on the screen a text label is displayed.  Looking in the background of that screenshot you can see touch moved events are constantly being fired and logged.  Additionally touch ended events are fired when the user removes their finger ( or releases the mouse button ).

 

Now let’s take a quick look at the code.  Our header file is pretty straight forward.  In addition to the normal methods, we add a quartet of handler functions for handling the various possible touch events.  We also add a member variable for our Label used to draw the text on the screen.

 

In the cpp file, we create the scene like normal.  In init() we create an EventListener of type EventListenerTouchOneByOne, which predictably handles touches, um, one by one ( as opposed to all at once, which we will see later ).  We then map each possible event, touch began, touch ended, touch cancelled and touch moved, to its corresponding handler function using the macro CC_CALLBACK_2, passing the function to execute and the context ( or target ).  This too will make sense later, so hold on there.  One thing to watch out for here, and one point of confusion for me: onTouchBegan has a different signature than every other event, returning a bool.  That return value tells Cocos2d-x whether the listener claims the touch; return true and this listener will receive the corresponding moved, ended and cancelled events for that touch, return false and the touch is ignored by it.

 

The last thing we do is register our EventListener to receive events.  This is done through Node's protected member _eventDispatcher.  We call addEventListenerWithSceneGraphPriority(), which means the listener's priority is derived from the node it is attached to and its place in the scene graph ( nodes drawn on top receive touches first ).  We will see an example of setting a fixed priority level later on.

 

What's this CC_CALLBACK_2 black magic?


I'm generally not a big fan of macro usage in C++. I believe they lead programmers to eventually turn their libraries into meta-programming languages and ultimately obfuscate the underlying code in the name of clarity. This, however, is one of the exceptions to the rule. CC_CALLBACK_2, and the entire CC_CALLBACK_ family, is simply a wrapper around some standard C++ code, specifically a call to std::bind. Here is the actual macro code:

#define CC_CALLBACK_0(__selector__,__target__, ...) std::bind(&__selector__,__target__, ##__VA_ARGS__)
#define CC_CALLBACK_1(__selector__,__target__, ...) std::bind(&__selector__,__target__, std::placeholders::_1, ##__VA_ARGS__)
#define CC_CALLBACK_2(__selector__,__target__, ...) std::bind(&__selector__,__target__, std::placeholders::_1, std::placeholders::_2, ##__VA_ARGS__)
#define CC_CALLBACK_3(__selector__,__target__, ...) std::bind(&__selector__,__target__, std::placeholders::_1, std::placeholders::_2, std::placeholders::_3, ##__VA_ARGS__)

Basically std::bind is used for binding parameters to a function. The std::placeholders entries correspond to the parameters your bound function expects. So for example, when you call CC_CALLBACK_2, you are saying the function takes two parameters, in this case a Touch* pointer and an Event* pointer. Similarly, CC_CALLBACK_1 would expect the provided function to take a single parameter. This kind of code is incredibly common in C++11; it's ugly, hard to read and grok, and easy to mistype. In these cases, macro use shines. Just be aware of what the macro you are calling does. Each time you encounter a macro in code, I recommend you right click and "Go to Definition", or CTRL+Click in Xcode, to see what it actually does, even if it doesn't make complete sense.
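
To make that concrete, here is a sketch of roughly what the earlier CC_CALLBACK_2 call expands to, along with a lambda that would do the same job:

// What we wrote:
touchListener->onTouchBegan = CC_CALLBACK_2(TouchScene::onTouchBegan, this);

// Roughly what the macro expands to:
touchListener->onTouchBegan = std::bind(&TouchScene::onTouchBegan, this,
    std::placeholders::_1, std::placeholders::_2);

// An equivalent hand-written lambda:
touchListener->onTouchBegan = [this](cocos2d::Touch* touch, cocos2d::Event* event) -> bool {
    return this->onTouchBegan(touch, event);
};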

 

 

In most of the touch handlers, we simply log that the event occurred.  In the event of a touch starting ( or click beginning ) we update the position of the label to where the user clicked and display the string “You Touched Here”.

 

Now let's take a look at an example that uses lambdas instead.  This example also goes into a bit more detail about what's in that Touch pointer we are being passed.  The header file is basically the same, except there are no onTouch____ functions.

 

Handling Touch Events using Lambdas and dealing with Touch coordinates

 

TouchScene.cpp

#include "TouchScene.h"

USING_NS_CC;

Scene* TouchScene::createScene()
{
    auto scene = Scene::create();
    auto layer = TouchScene::create();
    scene->addChild(layer);

    return scene;
}

bool TouchScene::init()
{
    if ( !Layer::init() )
    {
        return false;
    }
    
   auto sprite = Sprite::create("HelloWorld.png");
   sprite->setPosition(Vec2(Director::getInstance()->getVisibleSize().width / 2,
      Director::getInstance()->getVisibleSize().height / 2));

    // Add a "touch" event listener to our sprite
   auto touchListener = EventListenerTouchOneByOne::create();
   touchListener->onTouchBegan = [](Touch* touch, Event* event) -> bool {

      auto bounds = event->getCurrentTarget()->getBoundingBox();

      if (bounds.containsPoint(touch->getLocation())){
         std::stringstream touchDetails;
         touchDetails << "Touched at OpenGL coordinates: " << 
            touch->getLocation().x << "," << touch->getLocation().y << std::endl <<
            "Touched at UI coordinate: " << 
            touch->getLocationInView().x << "," << touch->getLocationInView().y << std::endl <<
            "Touched at local coordinate:" <<
            event->getCurrentTarget()->convertToNodeSpace(touch->getLocation()).x << "," <<  
            event->getCurrentTarget()->convertToNodeSpace(touch->getLocation()).y << std::endl <<
            "Touch moved by:" << touch->getDelta().x << "," << touch->getDelta().y;

            MessageBox(touchDetails.str().c_str(), "Touched");
         }
      return true;
      };

   Director::getInstance()->getEventDispatcher()->addEventListenerWithSceneGraphPriority(touchListener,sprite);
   this->addChild(sprite, 0);
    
    return true;
}

 

Now when you run it:

image

 

In this example, the touch event will only fire if the user clicked on the Sprite in the scene.  Notice that in the first line of the onTouchBegan handler I call event->getCurrentTarget()?  This is where the context becomes important.  In the line:

Director::getInstance()->getEventDispatcher()->addEventListenerWithSceneGraphPriority(touchListener,sprite);

The second parameter, sprite, is what determines the target of the Event.  The target is passed as a Node but can be cast if required.
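
For example, if you need Sprite-specific functionality inside the handler, here is a small sketch of casting the target back down ( the setOpacity call is purely illustrative ):

      auto touchedSprite = dynamic_cast<cocos2d::Sprite*>(event->getCurrentTarget());
      if (touchedSprite != nullptr){
         // Safe to use Sprite/Node members now; fade the sprite a little as feedback
         touchedSprite->setOpacity(128);
      }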

 

Lambda?


Lambdas are a new feature of C++ and they are probably something you will love or hate. If you come from a language such as C# you will probably find them long overdue; I certainly do!
Lambda is a scary sounding expression coming from the scary looking symbol Λ. In the world of mathematics, lambda calculus basically gives math the ability to define functions, something we as programmers can certainly appreciate. In the world of programming it's nowhere near as scary: a lambda expression can be thought of as an anonymous function. In simple terms, it allows you to create a nameless function right where you need it. As you can see from this example, it allows you to put event handling logic where it makes most sense, instead of splitting it out into a separate function. It is also a godsend when you want to pass a function as a parameter, a very common task in the C++ standard libraries.
The syntax of C++ lambdas is pretty ugly, but they are certainly a valuable addition to the language. Most importantly, they can often make your code easier to express and, as such, easier to comprehend and maintain. Learn to love the lambda and the lambda will learn to love you. Maybe.

 

In this example, we use the target node to only handle clicks that happen within the bounds of our Sprite Node.  This is done by testing whether the touch location is within the bounding box of the node.  If it is, we display a number of details in a message box.  Remember back in this tutorial part where I said there are multiple coordinate systems?  This is a perfect example.  As you can see from the message box above, getLocation() and getLocationInView() return different values: getLocationInView() is relative to the top left corner of the screen ( UI coordinates ), while getLocation() is relative to the bottom left corner ( OpenGL coordinates ).

 

Sometimes you also want to know where the click occurred relative to the node.  In the sample above, the local coordinate is the position of the click relative to the node's origin.  In order to calculate this location we use the helper function convertToNodeSpace().  One final thing you may notice is that I registered the EventListener through the Director instead of _eventDispatcher.  This was the old way of doing things and I did it this way for a couple of reasons.  First, to show that you can.  Second, because _eventDispatcher is a protected member of Node, I would only have access to the sprite's dispatcher if I derived my own Sprite class.
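
If you did want to go the subclassing route mentioned above, a minimal sketch of what that could look like; TouchableSprite is a hypothetical class, not part of Cocos2d-x:

class TouchableSprite : public cocos2d::Sprite
{
public:
    CREATE_FUNC(TouchableSprite);

    virtual bool init() override
    {
        if (!Sprite::initWithFile("HelloWorld.png"))
            return false;

        auto listener = cocos2d::EventListenerTouchOneByOne::create();
        listener->onTouchBegan = [](cocos2d::Touch*, cocos2d::Event*) -> bool {
            cocos2d::log("Sprite touched");
            return true;
        };

        // _eventDispatcher is a protected member of Node, so a subclass can use it directly
        _eventDispatcher->addEventListenerWithSceneGraphPriority(listener, this);
        return true;
    }
};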

 

Now let’s take a look at a multi-touch example.

 

Dealing with Multi-touch

 

Multi-touch works pretty much the same way, just with a separate set of event handlers.  There are a few catches however.  The big one is iOS.  Out of the box, Android just works.  iOS however requires you to make a small code change to enable multitouch support.  Don’t worry, it’s a simple process. 

 

In your project, locate the directory /proj.ios_mac/ios and open the file AppController.mm.  Then add the following line:

AppControllerMM

 

Simply add the line [eaglView setMultipleTouchEnabled:YES]; somewhere after the creation of eaglView.  Now multitouch should work in your iOS application, let’s look at some code:

 

MultiTouchScene.h

#pragma once

#include "cocos2d.h"

class MultiTouch : public cocos2d::Layer
{

    public:
        static cocos2d::Scene* createScene();

        virtual bool init();
        CREATE_FUNC(MultiTouch);
    private:
        const static int MAX_TOUCHES = 5;

    protected:
        cocos2d::Label* labelTouchLocations[MAX_TOUCHES];

};

 

MultiTouchScene.cpp

#include "MultiTouchScene.h"

USING_NS_CC;

Scene* MultiTouch::createScene()
{
    auto scene = Scene::create();
    auto layer = MultiTouch::create();
    scene->addChild(layer);

    return scene;
}

bool MultiTouch::init()
{
    if ( !Layer::init() )
    {
        return false;
    }

    // Create an array of Labels to display touch locations and add them to this node, defaulted to invisible
    for(int i= 0; i < MAX_TOUCHES; ++i) {
        labelTouchLocations[i] = Label::createWithSystemFont("", "Arial", 42);
        labelTouchLocations[i]->setVisible(false);
        this->addChild(labelTouchLocations[i]);
    }

    auto eventListener = EventListenerTouchAllAtOnce::create();

    //  Create an eventListener to handle multiple touches, using a lambda, cause baby, it's C++11
    eventListener->onTouchesBegan = [=](const std::vector<Touch*>&touches, Event* event){

        // Clear all visible touches just in case there are less fingers touching than last time
        std::for_each(labelTouchLocations,labelTouchLocations+MAX_TOUCHES,[](Label* touchLabel){
            touchLabel->setVisible(false);
        });

        // For each touch in the touches vector, set a Label to display at it's location and make it visible
        for(int i = 0; i < touches.size(); ++i){
            labelTouchLocations[i]->setPosition(touches[i]->getLocation());
            labelTouchLocations[i]->setVisible(true);
            labelTouchLocations[i]->setString("Touched");
        }
    };

    _eventDispatcher->addEventListenerWithSceneGraphPriority(eventListener, this);

    return true;
}

 

Here is the code running on my iPad with multiple fingers touched:

IMG_0189

 

Granted, not the most exciting screenshot ever, but as you can see, at each location the user touched, a label is displayed.  Let's take a quick look at the code and see what's happening.  At this point most of it should be pretty familiar, so let's just focus on the differences.

 

First you will notice I added an array of Labels, MAX_TOUCHES in size.  I chose 5 as, frankly, that seems to be the limit of what I could register on the iPad.  I had it set to 10, but it never registered more than 5, so 5 it was!  Truth of the matter is, I can't really imagine a control scheme that used more than 5 touches being all that useful, so 5 touches seems like a reasonable limitation, even though I'm pretty certain the hardware can handle more.

 

In our init() we start off by allocating each of our labels and setting their initial visibility to invisible.  Then we create our EventListener, this time an EventListenerTouchAllAtOnce, because we want to, well, get all the touch events at the same time.  Instead of handling onTouchBegan, we instead handle onTouchesBegan, which takes a std::vector ( careful here, as cocos2d has its own Vector class… the peril of namespace abuse! ) of Touch* as well as an Event*.

 

In the event of touch(es), we first loop through all of our labels and set them to invisible.  Then for each touch in the touches vector, we move a label to that position and make it visible.  Once again we register the EventListener with our node’s _eventDispatcher.

 

So, we’ve covered touch and multi-touch, what about when you want to use the mouse?  Amazingly enough there are users out there with mice with more than a single button after all! ;)

 

Handling the Mouse

 

At this point you can probably guess the code I am about to write, as the process is remarkably similar, but let's go through it anyway.  I won't bother with the .h file, there's nothing special in there.

 

MouseScene.cpp

#include "MouseScene.h"

USING_NS_CC;

cocos2d::Scene* MouseScene::createScene()
{
    auto scene = Scene::create();
    auto layer = MouseScene::create();
    scene->addChild(layer);

    return scene;
}

bool MouseScene::init()
{
    if ( !Layer::init() )
    {
        return false;
    }   

   auto listener = EventListenerMouse::create();
   listener->onMouseDown = [](cocos2d::Event* event){
      // dynamic_cast on a pointer returns nullptr on failure (it only throws
      // std::bad_cast for references), so check the result instead of catching
      EventMouse* mouseEvent = dynamic_cast<EventMouse*>(event);
      if (mouseEvent == nullptr)
         return;

      std::stringstream message;
      message << "Mouse event: Button: " << mouseEvent->getMouseButton() << " pressed at point (" <<
         mouseEvent->getLocation().x << "," << mouseEvent->getLocation().y << ")";
      MessageBox(message.str().c_str(), "Mouse Event Details");
   };

   listener->onMouseMove = [](cocos2d::Event* event){
      // Cast Event to EventMouse for position details like above
      cocos2d::log("Mouse moved event");
   };

   listener->onMouseScroll = [](cocos2d::Event* event){
      cocos2d::log("Mouse wheel scrolled");
   };

   listener->onMouseUp = [](cocos2d::Event* event){
      cocos2d::log("Mouse button released");
   };

   _eventDispatcher->addEventListenerWithFixedPriority(listener, 1);

    return true;
}

 

Now run it, scroll the mouse wheel a couple times, click and you will see:

image

 

Yeah… not really exciting either.  As you can see, when you click a mouse button it is returned as a number: the left button is 0, the middle is 1, the right is 2, and so on.  The code is all very familiar except we use an EventListenerMouse this time and handle onMouseDown, onMouseUp, onMouseMove and onMouseScroll.  The only other thing of note is that you need to cast the provided Event pointer to an EventMouse pointer to get access to the mouse details.
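
If the raw numbers bother you, here is a small sketch of turning them into something readable.  It assumes getMouseButton() returns the integer codes described above, as it did in the Cocos2d-x version used for this article:

std::string buttonName(int button)
{
   switch (button){
      case 0:  return "Left";
      case 1:  return "Middle";
      case 2:  return "Right";
      default: return "Other";
   }
}

// Then inside onMouseDown:
// message << "Mouse event: Button: " << buttonName(mouseEvent->getMouseButton()) << " pressed...";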

 

With the exception of gestures, that should pretty much cover all of your mouse and touch needs.  Gestures aren't actually supported out of the box, but extensions exist.  Additionally, all mouse and touch events contain delta information as well as data on the previous touch/click, which should make rolling your own fairly simple.
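
As a simple illustration of that delta data, here is a sketch of dragging a sprite around with onTouchMoved.  It assumes a touchListener created and registered against the sprite exactly like the earlier touch examples:

   touchListener->onTouchMoved = [](cocos2d::Touch* touch, cocos2d::Event* event){
      // getDelta() is how far the touch moved since the last moved event,
      // so adding it to the node's position drags the node with the finger
      auto node = event->getCurrentTarget();
      node->setPosition(node->getPosition() + touch->getDelta());
   };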

 

Programming

