
10. January 2017

 

Carmel is a new WebVR browser, available in preview form for the Gear VR.  WebVR is an effort to bring virtual reality to the web browser, and Carmel is Oculus's mobile browser supporting it.  You can learn more about the VR web here.  In a nutshell, WebVR will enable you to write VR experiences much the way you create dynamic web pages today.  To help with that, Oculus has released the Carmel VR Starter Kit, a set of samples and examples to get you up and running in WebVR for the Carmel browser.

 

You can download the starter kit on GitHub, and getting started is easy, assuming you have Node.js installed.

Running Samples

First, run npm install to get the npm dependencies used for hosting the samples locally.

Run npm start to start a local http server on port 8000.

You can navigate to http://<your-machine-ip>:8000/index.html to access the samples on a Gear VR enabled mobile device.

The top category of links will launch each sample into the Carmel Technical Preview.

The bottom category of links can be used to launch each sample directly in your browser if the sample supports rendering monoscopically.
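Putting those steps together, the whole process from the starter kit’s root directory looks like this (assuming Node.js and npm are already installed):

npm install     # fetch the dependencies used to host the samples locally
npm start       # serve the samples on port 8000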

 

Here is a look at the Hello World example’s code:

<!DOCTYPE html>
<!--
  Copyright 2016-present, Oculus VR, LLC.
  All rights reserved.

  This source code is licensed under the license found in the
  LICENSE-examples file in the root directory of this source tree.
-->
<html>
  <head>
    <title>Hello WebVR</title>
    <style>
      body {
        margin: 0;
      }
      canvas {
        position: absolute;
        width: 100%;
        height: 100%;
      }
      #messages {
        position: absolute;
        color: white;
        width: 100%;
        height: 100%;
      }
    </style>
    <script>
      var vrDisplay;      // The VRDisplay we will present to, discovered from getVRDisplays
      var frameData;      // HMD information, populated each frame by getFrameData
      var layerSource;    // The source of the VRLayer passed to requestPresent, our canvas element.

      var gl;             // The webgl context of the canvas element, used to render the scene
      var quadProgram;    // The WebGLProgram we will create, a simple quad rendering program
      var attribs;        // A map of shader attributes to their location in the program
      var uniforms;       // A map of shader uniforms to their location in the program
      var vertBuffer;     // Vertex buffer used for rendering the scene
      var texture;        // The texture that will be bound to the diffuse sampler
      var quadModelMat;   // The quad's model matrix which we will animate

      // This is the entrypoint to this sample and where we attempt to begin VR presentation
      function requestPresent() {
        // First, initialize our WebGL program for rendering a simple quad
        initWebGLProgram();

        // Next, we will get the first VRDisplay that is available and try to requestPresent.
        // If VR is unavailable or we aren't able to present, we will simply display an HTML message in the page.
        if (navigator.getVRDisplays) {
          navigator.getVRDisplays().then(function (displays) {
            if (displays.length > 0) {
              // We reuse this every frame to avoid generating garbage
              frameData = new VRFrameData();

              vrDisplay = displays[0];

              // We must adjust the canvas (our VRLayer source) to match the VRDisplay
              var leftEye = vrDisplay.getEyeParameters("left");
              var rightEye = vrDisplay.getEyeParameters("right");

              // This layer source is a canvas so we will update its width and height based on the eye parameters.
              // For simplicity we will render each eye at the same resolution
              layerSource.width = Math.max(leftEye.renderWidth, rightEye.renderWidth) * 2;
              layerSource.height = Math.max(leftEye.renderHeight, rightEye.renderHeight);

              // This can normally only be called in response to a user gesture.
              // In Carmel, we can begin presenting the VR scene right away.
              vrDisplay.requestPresent([{ source: layerSource }]).then(function () {
                // Start our render loop, which is synchronized with the VRDisplay refresh rate
                vrDisplay.requestAnimationFrame(onAnimationFrame);
              }).catch(function (err) {
                // The Carmel Developer preview allows entry into VR at any time because it is a VR first experience.
                // Other browsers will only allow this to succeed if called in response to user interaction, such as a click or tap though.
                // We expect this to fail outside of Carmel and would present the user with an "Enter VR" button of some sort instead.
                addHTMLMessage("Failed to requestPresent.");
              });
            } else {
              // Usually you would want to hook the vrdisplayconnect event and only try to request present then.
              addHTMLMessage("There are no VR displays connected.");
            }
          }).catch(function (err) {
            addHTMLMessage("VR Displays are not accessible in this context. Perhaps you are in an iframe without the allowvr attribute specified.");
          });
        } else {
          addHTMLMessage("WebVR is not supported on this browser.");
          addHTMLMessage("To support progressive enhancement your fallback code should render a normal Canvas based WebGL experience for the user.");
        }
      }

      // Once we are presenting this will get called any time a new frame should be rendered on the VRDisplay
      // The timestamp passed to our callback is the current DOMHighResTimeStamp at the start of the frame.
      // We can use the timestamp to update our scene and perform animations in a framerate independent way.
      function onAnimationFrame(timestamp) {
        // Continue to request frames to keep the render loop going
        vrDisplay.requestAnimationFrame(onAnimationFrame);

        // Clear the layer source - we do this outside of render to avoid clearing twice
        gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);

        // Update the scene once per frame
        update(timestamp);

        // Get the current pose data
        vrDisplay.getFrameData(frameData);

        // Render the left eye
        gl.viewport(0, 0, layerSource.width * 0.5, layerSource.height);
        render(frameData.leftProjectionMatrix, frameData.leftViewMatrix);

        // Render the right eye
        gl.viewport(layerSource.width * 0.5, 0, layerSource.width * 0.5, layerSource.height);
        render(frameData.rightProjectionMatrix, frameData.rightViewMatrix);

        // Submit the newly rendered layer to be presented by the VRDisplay
        vrDisplay.submitFrame();
      }

      function update(timestamp) {
        // Animate the z location of the quad based on the current frame timestamp
        var oscillationSpeed = Math.PI / 2;
        var z = -1 + Math.cos(oscillationSpeed * timestamp / 1000);
        quadModelMat[14] = z;
      }

      // For VR, it's important that your render method is parameterized by the camera
      // (projection and view matrices) so that it can be used to render from each
      // eye's perspective
      function render(projectionMat, viewMat) {
        gl.useProgram(quadProgram);

        // The view and projection uniforms are passed in and are different for the left eye and right eye
        gl.uniformMatrix4fv(uniforms.projectionMat, false, projectionMat);
        gl.uniformMatrix4fv(uniforms.viewMat, false, viewMat);

        // The remainder of our rendering is the same for both eyes now that view and projection have been set up.
        gl.uniformMatrix4fv(uniforms.modelMat, false, quadModelMat);

        gl.bindBuffer(gl.ARRAY_BUFFER, vertBuffer);

        gl.enableVertexAttribArray(attribs.position);
        gl.enableVertexAttribArray(attribs.texCoord);

        gl.vertexAttribPointer(attribs.position, 3, gl.FLOAT, false, 20, 0);
        gl.vertexAttribPointer(attribs.texCoord, 2, gl.FLOAT, false, 20, 12);

        gl.activeTexture(gl.TEXTURE0);
        gl.uniform1i(uniforms.diffuse, 0);
        gl.bindTexture(gl.TEXTURE_2D, texture);

        gl.drawArrays(gl.TRIANGLE_FAN, 0, 4);
      }

      function initWebGLProgram() {
        layerSource = document.getElementById("webgl-canvas");

        var glAttribs = {
          alpha: false,                   // The canvas will not contain an alpha channel
          antialias: true,                // We want the canvas to perform anti-aliasing
          preserveDrawingBuffer: false    // We don't want our drawing to be retained between frames, we will fully rerender each frame.
        };

        // You should also check for "experimental-webgl" when implementing support for canvas based WebGL fallback when VR is not available.
        gl = layerSource.getContext("webgl", glAttribs);

        var quadVS = [
          "uniform mat4 projectionMat;",
          "uniform mat4 viewMat;",
          "uniform mat4 modelMat;",
          "attribute vec3 position;",
          "attribute vec2 texCoord;",
          "varying vec2 vTexCoord;",

          "void main() {",
          "  vTexCoord = texCoord;",
          "  gl_Position = projectionMat * viewMat * modelMat * vec4(position, 1.0);",
          "}",
        ].join("\n");

        var quadFS = [
          "precision mediump float;",
          "uniform sampler2D diffuse;",
          "varying vec2 vTexCoord;",

          "void main() {",
          "  gl_FragColor = texture2D(diffuse, vTexCoord);",
          "}",
        ].join("\n");

        quadProgram = gl.createProgram();

        var vertexShader = gl.createShader(gl.VERTEX_SHADER);
        gl.attachShader(quadProgram, vertexShader);
        gl.shaderSource(vertexShader, quadVS);
        gl.compileShader(vertexShader);

        var fragmentShader = gl.createShader(gl.FRAGMENT_SHADER);
        gl.attachShader(quadProgram, fragmentShader);
        gl.shaderSource(fragmentShader, quadFS);
        gl.compileShader(fragmentShader);

        attribs = {
          position: 0,
          texCoord: 1
        };

        gl.bindAttribLocation(quadProgram, attribs.position, "position");
        gl.bindAttribLocation(quadProgram, attribs.texCoord, "texCoord");

        gl.linkProgram(quadProgram);

        uniforms = {
          projectionMat: gl.getUniformLocation(quadProgram, "projectionMat"),
          modelMat: gl.getUniformLocation(quadProgram, "modelMat"),
          viewMat: gl.getUniformLocation(quadProgram, "viewMat"),
          diffuse: gl.getUniformLocation(quadProgram, "diffuse")
        };

        var size = 0.2;
        var quadVerts = [];

        var x = 0;
        var y = 0;
        var z = -1;
        quadVerts.push(x - size, y - size, z + size, 0.0, 1.0);
        quadVerts.push(x + size, y - size, z + size, 1.0, 1.0);
        quadVerts.push(x + size, y + size, z + size, 1.0, 0.0);
        quadVerts.push(x - size, y + size, z + size, 0.0, 0.0);

        vertBuffer = gl.createBuffer();
        gl.bindBuffer(gl.ARRAY_BUFFER, vertBuffer);
        gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(quadVerts), gl.STATIC_DRAW);

        quadModelMat = new Float32Array([
          1, 0, 0, 0,
          0, 1, 0, 0,
          0, 0, 1, 0,
          0, 0, 0, 1
        ]);

        texture = gl.createTexture();

        var image = new Image();

        // When the image is loaded, we will copy it to the GL texture
        image.addEventListener("load", function() {
          gl.bindTexture(gl.TEXTURE_2D, texture);
          gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, image);

          gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.LINEAR);
          gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR_MIPMAP_NEAREST);

          // To avoid bad aliasing artifacts we will generate mip maps to use when rendering this texture at various distances
          gl.generateMipmap(gl.TEXTURE_2D);
        }, false);

        // Start loading the image
        image.src = "../assets/cube-sea.png";
      }

      function addHTMLMessage(msgText) {
        var message = document.createElement("div");
        message.innerHTML = msgText;
        document.getElementById("messages").appendChild(message);
      }
    </script>
  </head>
  <body onload="requestPresent()">
    <canvas id="webgl-canvas"></canvas>
    <div id="messages"></div>
  </body>
</html>

News

6. January 2017

 

Back in September I reported on the impending end of development for the cross platform C++ game framework Marmalade.  It was slated to see its final release in March of 2017.  According to an email I received earlier today, a last minute savior has stepped in: GMO Cloud K.K.  Not a traditional gaming company, GMO Cloud is a Japanese group of companies known primarily for cloud hosting and security solutions, perhaps best known in the west as the parent company of GlobalSign, one of the oldest SSL certificate companies in existence.  While not a gaming company, it does appear that they intend to continue development of Marmalade and to keep supporting Marmalade developers.  An excerpt from their press release:

GMO Cloud K.K. (President and CEO: Mitsuru Aoyama, “GMO Cloud”), a member of the GMO Internet Group, was granted the exclusive rights to use the Marmalade software development kit (SDK) by Marmalade Technologies Ltd (Headquarters: London, UK, “Marmalade Technologies”) on November 24, 2016. The Marmalade SDK features cross platform support.


This license will allow GMO Cloud to accelerate the development of and the ability to offer new features for the Marmalade SDK. GMO Cloud wants to increase its ability to support the development of games and apps in both Japan and abroad in order to expand and enhance sales and support throughout the world.

Followed by:

To help meet these needs, GMO Cloud obtained a license to resell the cross platform SDK “Marmalade,” which enables the development of games and apps compiled from a single source code. They released the SDK on July 1, 2015 and currently offer it to game developers in Japan.


“We are very excited about GMO managing the development and distribution of the Marmalade SDK to enable our customers to continue building compelling content with the high performance and flexible C++ framework that the Marmalade SDK delivers,” said Bruce Beckloff, chairman and CEO of Marmalade Technologies Ltd.


GMO Cloud believes that by performing everything from development to provision of the SDK that it had previously only sold, it will be able to further stimulate the development of games and apps in Japan and around the world. The company has thus been transferred exclusive rights to use Marmalade from Marmalade Technologies. In addition to selling it in the Japanese and overseas markets, GMO Cloud will develop new features for this tool that will make developers’ lives easier. The company will also enhance the sales and support that it offers in Japan and elsewhere to make Marmalade a more popular SDK, continuing to support the evolution of the game and app development market.

 

You can read the full press release here.  This is certainly good news for existing Marmalade developers.  It will be interesting to see what the future holds.

GameDev News

4. January 2017

 

Inkscape is a popular open source vector graphics package available for Windows and Linux.  The OS X version is not yet available, for reasons explained here.  Version 0.92 adds a few new features, including mesh gradients, new live path effects and more.  Below is a summary of the new features, although much more detailed release notes are available here.

Exciting New Features

Mesh gradients are a powerful tool enabling artists to more easily create photo-realistic drawings - a feature we would love to see made part of the W3C's Scalable Vector Graphics (SVG) standard; if you find this feature as cool as we do, please request that your favorite web browser adopts support for it too!

Inkscape works closely with the SVG standards committee, and thanks to our many generous donors, the Inkscape project sponsors an engineering representative to attend SVG working group meetings over the past few years. A welcome outcome of this participation is the improvement and addition of over a dozen SVG and CSS properties.

Live Path Effects are proving to be a vibrant ecosystem in the Inkscape project, and many innovative new ideas have been percolating over the past couple years since our last release. Spiro Live, BSpline, and Roughen essentially provide new drawing modes. The Simplify LPE cleans up vector elements non-destructively by smoothing paths, shapes, groups, clips, and masks. Perspective/Envelope and Lattice Deformation 2 enable artists to interactively deform/transform drawing elements. Other new features enable interactive mirroring, interactive rotation of copies along an arc or circle, and dozens of other additions and enhancements.

 

Below is a video showing Inkscape 0.92 in action.  You can learn more about and download Inkscape here.

GameDev News

4. January 2017

 

Today marks the release of Cocos2d-x 3.14.  Cocos2d-x is a popular open source cross platform C++ game library that was used to make games such as BADLAND, Clash of Kings and Final Fantasy Record Keeper.  If you want to learn more about Cocos2d-x, we have a complete tutorial series available here.

 

Cocos2d-x 3.14 brings several fixes and improvements including:

  • [NEW] Add Spine binary file format support
  • [NEW] Action: add a method to get the number of actions running in a given node with specific tag
  • [NEW] Action: new actions: ResizeBy and ResizeTo
  • [NEW] Button: can set title label
  • [NEW] Can disable multi touch on Android
  • [NEW] EventDispatcher: Add hasEventListener to check listener existence
  • [NEW] EditBox: add horizontal text alignment
  • [NEW] EventDispatcher: added hasEvent() to check if an event is added
  • [NEW] Sprite: support slice9 feature
  • [NEW] Slider: add methods to get _slidBallNormalRenderer
  • [NEW] Desktop: add a method to toggle between fullscreen and windowed
  • [NEW] Desktop: add events for window resize, focus and unfocus
  • [NEW] Mac: supports game controller
  • [NEW] JSB: add cc.sys.now() and performance.now(); the latter is more accurate
  • [NEW] Lua: add cc.vec3 functions: add, sub and dot
  • [NEW] Lua: use luajit 2.1.0-beta2
  • [NEW] Web: Add cc.CONCURRENCY_HTTP_REQUEST_COUNT to control max concurrent task count for XMLHttpRequest

 

More information about the release is available here, while Cocos2d-x is available for download here.

GameDev News

2. January 2017

 

In our previous tutorial in the BabylonJS Tutorial Series we covered positioning a camera in our world.  There were still a few fundamental components missing, chief among them lighting, which we are going to cover today.  Lights are used to, predictably enough, illuminate your scene.  They interact with the colors and materials of the various entities that compose your scene.  There are multiple light types available in BabylonJS, including the Point Light, Directional Light, Spot Light and Hemispheric Light.  A point light is a single light source that radiates in all directions, like a naked light bulb for example.  A directional light radiates only in the direction it is pointed, and it goes on forever with no falloff, illuminating everything in its path regardless of distance.  A spot light is similar to a directional light, but it does fall off over a given distance and is cone shaped; a flashlight is a classic example of a spot light, as of course is a spot (or search) light.  A hemispheric light is generally used to represent an ever-present ambient light source, the sun being perhaps the most common example.  You can also emit light from textures using their emissive property, but we will cover that at a later point.  In this tutorial we are going to implement a point light and a spot light.
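For reference, the two light types we won’t be implementing today are created in much the same way.  Here is a minimal sketch (the variable names are my own):

var dirLight = new BABYLON.DirectionalLight("dirLight",
        new BABYLON.Vector3(0, -1, 0),   // the direction the light travels in
        scene);
var hemiLight = new BABYLON.HemisphericLight("hemiLight",
        new BABYLON.Vector3(0, 1, 0),    // the "up" direction, toward the sky
        scene);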

 

There is an HD video version of this tutorial available here.

 

Let’s start with a point light.  It’s a simple light that radiates from a single point (thus the name) in all directions.

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>Title</title>
    <script src="../Common/Lib/babylon.max.js"></script>

    <style>
        #canvas {
            width:100%;
            height:100%;
        }
    </style>
</head>
<body>
<canvas id="canvas"></canvas>
<script>
    window.addEventListener('DOMContentLoaded', function(){
        var canvas = document.getElementById('canvas');

        var engine = new BABYLON.Engine(canvas, true);

        var createScene = function(){
            var scene = new BABYLON.Scene(engine);
            scene.clearColor = new BABYLON.Color3.White();

            var box = BABYLON.Mesh.CreateBox("Box",4.0,scene);
            var camera = new BABYLON.ArcRotateCamera("arcCam",
                    BABYLON.Tools.ToRadians(45),
                    BABYLON.Tools.ToRadians(45),
                    10.0,box.position,scene);
            camera.attachControl(canvas,true);

            var light = new BABYLON.PointLight("pointLight",new BABYLON.Vector3(0,10,0),scene);
            light.diffuse = new BABYLON.Color3(1,0,0);

            scene.actionManager = new BABYLON.ActionManager(scene);
            scene.actionManager.registerAction(
                    new BABYLON.ExecuteCodeAction(
                            { trigger: BABYLON.ActionManager.OnKeyUpTrigger, parameter: " " },
                            function () {
                                light.setEnabled(!light.isEnabled());
                            }
                    ));

            return scene;
        }

        var scene = createScene();
        engine.runRenderLoop(function(){
            var light = scene.getLightByName("pointLight");
            light.diffuse.g += 0.01;
            light.diffuse.b += 0.01;
            scene.render();
        });

    });
</script>
</body>
</html>

 

There are a couple of things illustrated in this example.  Creating a point light is done by calling new BABYLON.PointLight(), passing in the ID of the light, the position of the light in the world and finally the scene in which the light exists.  You can set the color of the light by setting its diffuse property; in this case we set it to full red only.  You will notice this example also shows a new concept in BabylonJS, the ActionManager.  This is a way of wiring code to specific events.  In this case we add some code that will be fired when the space key is pressed.  That function simply toggles the light source on and off by calling setEnabled(), passing a true or false value.  In the render loop we also slowly increase the light's green and blue components, so you can see the effect of diffuse lighting on the scene.  When you run this code you should see:

[animated GIF of the point light example running]

 

Lights are implemented as part of the GLSL shader process and the active lights in the scene are passed to each StandardMaterial in the scene.  By default the standard material is limited to a maximum of four active lights.  This value can be overridden using the maxSimultaneousLights property of the StandardMaterial, although this may have some impact on performance, especially on mobile targets.
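For example, here is a minimal sketch of raising that limit (the material name is my own, and the box mesh is the one from the example above):

var material = new BABYLON.StandardMaterial("boxMat", scene);
material.maxSimultaneousLights = 8;  // the default is 4
box.material = material;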

 

Next let's look at implementing a spot light.  As with all things BabylonJS, the process is quite similar:

<script>
    window.addEventListener('DOMContentLoaded', function(){
        var canvas = document.getElementById('canvas');

        var engine = new BABYLON.Engine(canvas, true);

        var createScene = function(){
            var scene = new BABYLON.Scene(engine);
            scene.clearColor = new BABYLON.Color3.White();

            var box = BABYLON.Mesh.CreateBox("Box",4.0,scene);
            var camera = new BABYLON.ArcRotateCamera("arcCam",
                    BABYLON.Tools.ToRadians(45),
                    BABYLON.Tools.ToRadians(45),
                    10.0,box.position,scene);
            camera.attachControl(canvas,true);

            var light = new BABYLON.SpotLight("spotLight",new BABYLON.Vector3(0,10,0),new BABYLON.Vector3(0,-1,0),
                    BABYLON.Tools.ToRadians(45), // degrees the light fans out
                    0.1, // falloff/decay of the light over distance
                    scene);

            return scene;
        }

        var scene = createScene();
        engine.runRenderLoop(function(){
            var light = scene.getLightByName("spotLight");
            light.position.y -= 0.01;
            scene.render();
        });

    });
</script>

 

In this example we create the spot light with a call to new BABYLON.SpotLight(), passing in the id, the position, the direction vector, the angle of the light cone (here 45 degrees, converted to radians), the exponent controlling how quickly the light falls off over distance and finally the scene to create the light in.  This time, instead of changing the color of the light each frame, we move it slightly.  Run this code and you should see:

[animated GIF of the spot light example running]

 

As the light is pulled back, the falloff of the cone is quite prominently displayed.  Of course, the lack of textures makes this example more than a bit stark, so that is what we will cover in the next tutorial.

 

The Video

Programming


Atari brings classics to HTML5 and releases the developer libraries and an arcade


30. August 2012

 

In celebration of their 40th anniversary, Atari has re-released a number of their classic games in HTML5 form in their newly launched web arcade.  Each of the titles has received a facelift, and the list includes:

  • Asteroids
  • Centipede
  • Combat
  • Lunar Lander
  • Missile Command
  • Pong
  • Super Breakout
  • Yars’ Revenge

 

 

As you can see, the games have received a facelift:

 

Asteroids:

[screenshot of the updated Asteroids]

 

Yars’ Revenge:

[screenshot of the updated Yars’ Revenge]

 

 

 

The project is a team-up between Atari, CreateJS and Microsoft.  The Microsoft connection is Internet Explorer 10, which allows you to view the arcade ad-free.  Atari is releasing an SDK for publishing on their arcade; the download and documentation page is currently down, so details are a bit sparse right now.  Their quick start PDF is currently available and gives a glimpse into the process.  Presumably the arcade would work on a revenue sharing scheme, but that is just guesswork at the moment.

 

The library used to create all the games is called CreateJS, and is a bundling of HTML5 libraries including:

EaselJS – an HTML5 Canvas library with a Flash-like API

TweenJS – a chainable tweening library

SoundJS – an HTML5 audio library

PreLoadJS – an asset loading and caching library

 

Plus the newly added tool Zoe, which takes SWF Flash animations and generates sprite sheets.
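To give a quick taste of that Flash-like API, here is a minimal EaselJS sketch (the canvas id and the CDN path are my own placeholders, not taken from the Atari SDK):

<canvas id="demo" width="400" height="300"></canvas>
<script src="https://code.createjs.com/1.0.0/easeljs.min.js"></script>
<script>
    // A Stage is the EaselJS equivalent of the Flash stage, bound to a canvas
    var stage = new createjs.Stage("demo");

    // Draw a red circle using the chainable graphics API
    var circle = new createjs.Shape();
    circle.graphics.beginFill("red").drawCircle(0, 0, 40);
    circle.x = 200;
    circle.y = 150;

    stage.addChild(circle);
    stage.update();   // render the display list to the canvas
</script>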

 

 

I look forward to looking into Atari’s new API once their documentation page is back online.  Atari has also created a GitHub repository to support the project, but it is currently a little sparse.  In their own words:

 

Welcome to the Atari Arcade SDK.

This is the initial release of the SDK, which we hope to evolve over the next few weeks, adding
* more documentation
* examples
* updates

This repository contains
* Atari Arcade SDK classes in scripts/libs
* scripts necessary to run the SDK locally, in scripts/min
* API documentation and a quick start guide in docs/
* A test harness page to bootstrap and launch games

 

 

All told, a pretty cool project.  At the very least, check out the arcade; it’s a great deal of fun.

 

General

