
27. April 2017

 

Unreal Engine has been a bit quiet on the release front lately, but Epic just dropped a preview of their upcoming 4.16 release.  Of course, it is just a preview, so I wouldn't recommend using it in a production environment.  There is a massive number of improvements and new features in this release, including an all new audio engine, volumetric fog, a new cloth simulation, HTML5 support for WebAssembly and much more.

 

Details of the release from the Unreal Engine forums:

 

  • Animation/Physics Updates:
    • A new clothing solver from Nvidia called NvCloth replaces APEX. This new solver gives more control over simulation and it is pretty close to the core solver of the previous APEX solution so should behave largely the same, although there are a few slight behavior changes and some extra exposed parameters for inertia settings. We now also always simulate clothing in local space - this gives us the benefit of only having one transform path through the solver and allows external inertia to be tweaked in much finer detail.
    • Vertex & Texture Painting on Skeletal Meshes has been added.
    • Vertex & Texture Painting on Static Meshes has been overhauled to improve usability.
    • A Spline IK node has been added to Animation Blueprints; useful for character spines.
    • Animation Modifiers are a new type of class which allow developers to apply a sequence of actions to a given animation sequence or skeleton. They should always implement OnApply and OnRevert to allow modifying data and remove previously applied changes.
      • Accessing Animation Modifiers is done through a new tab in the skeleton and animation editor.
      • A new set of functions to access specific animation data has been added; they’re contained in a new editor-only function library (Animation Blueprint Library).
    • The RigidBody node supports simulating a physics asset inside an Animation Blueprint. This is similar to AnimDynamics, but instead of specifying a node per bone a single node is used for the entire skeletal mesh. Performance has also been improved.
    • PoseDriver has been improved, including more control over selected and modified bones, curve control, and UI improvements.
    • The ability to have Kinematic bodies with simulated parents has been added, allowing effects like a kinematic hand attached to other objects.
    • The 'Look At' node has been improved to be usable relative to a bone or socket.
    • Capsule collision can now be imported from an FBX file. You now use the ‘UCP’ prefix on a capsule poly mesh, and it will be removed on import, and replaced by a corresponding capsule collision shape.
    • The ability to store asset viewer profiles at a 'shared' or 'local' level has been added. This allows a team to have a shared set of profiles which can be used as a unified scene to assess art assets. Shared profiles are stored in DefaultEditor.ini, which you will need to check out or make writable.
    • Added support to import Retarget Base Pose. A new option allows you to import the pose from Pose Asset.
    • Animation Export improvements support run-time retargeting results and animation that applies Post Process Graph.
    • A new ‘GetMaterialFromFaceIndex’ function for components retrieves the actual material after performing a complex line trace. This is supported for StaticMeshes, ProceduralMeshes and BSP.
    • The new Play Montage node is an async Blueprint node that can be used in any Blueprint logic to play Anim Montages, with easy access to some of the callback events it provides.
  • Rendering Updates:
    • Volumetric Fog is now supported. Epic is looking for feedback on this feature.
      • Volumetric Fog controls are on the Exponential Height Fog Component.
      • Localized fog density can be controlled via particles. To do so, create a Material, set the material's domain to Volume, and apply it to a particle emitter. The volume material describes Albedo, Emissive and Extinction for a given point in space. Albedo is in the range [0-1] while Emissive and Extinction are world space densities with any value greater than 0. Volume materials currently only work on particles, and only positions inside the particle's radius are valid.
      • Each light has a 'Volumetric Scattering Intensity' and 'Cast Volumetric Shadow' setting. (fast-changing lights like flashlights and muzzle flashes leave lighting trails. Disable volumetric fog contribution on these lights with 'Volumetric Scattering Intensity' set to 0)
      • Supported lights include:
        • A single Directional Light, with shadowing from Cascaded Shadow Maps or static shadowing, with a Light Function
        • Any number of point and spot lights, with dynamic or static shadowing if 'Cast Volumetric Shadow' is enabled.
        • A single skylight, with shadowing from Distance Field Ambient Occlusion if enabled
        • Particle lights, if 'Volumetric Scattering Intensity' is greater than 0
      • Not currently supported:
        • Precomputed global illumination
        • Shadowing of Stationary skylights
        • IES profiles and Light Functions on point and spot lights
        • Shadowing from Ray Traced Distance Field Shadows
        • Shadowing from the volumetric fog itself
      • The GPU cost of Volumetric Fog is primarily controlled by the volume texture resolution, which is set from the Engine Shadow Scalability level. Use 'profilegpu' to inspect this cost.
    • Distance Field Lighting has been optimized for current generation consoles and mid-spec PC.
      • The Global Distance Field is much faster to update when a Movable object changes.
      • New project settings, ‘Eight Bit Mesh Distance Fields’ and ‘Compress Mesh Distance Fields’, significantly reduce memory requirements when enabled.
      • Runtime cost has been reduced by 30-50%.
      • Mesh Distance Field generation uses Intel’s Embree ray tracing library, for a 2.5x speedup
    • Vertex Interpolator nodes have been added to the material graph. These nodes offer better control over value interpolation between vertex and pixel work. They are intended as a workflow improvement; there are no changes to interpolator limits, nor will shaders change. The feature is compatible with Customized UVs and will pack results together.
    • Post processing now supports Image-based (FFT) convolution for physically realistic bloom effects in addition to the existing bloom method. This new post processing feature is designed for use in cinematics or on high-end hardware.
      • Image based Convolution adds new control parameters to the existing Lens | Bloom section found in Post Process volumes.
        • When using a new texture as the kernel image, it is important to ensure that the full image is present on the GPU and available at full resolution. Two settings are required to ensure this:
        • Mips Gen Setting should be set to NoMipmaps
        • Texture option Never Stream should be selected.
  • Core Updates:
    • Garbage Collection improvements make collection times twice as fast.
    • Removed support for Visual Studio 2013. VS 2015 and VS 2017 are currently supported.
  • Tools Updates:
    • Localized String Tables are now supported. These provide a way to centralize your localized text into one (or several) known locations, and then reference the entries within a string table from other assets or code in a robust way that allows for easy re-use of localized text. String Tables can be defined in C++, loaded via CSV file, or created as an asset.
    • Color Grading improvements include polished UI, a new HSV mode, dynamically changing the min/max value of sliders using Ctrl+drag, and a new Reset button.
    • The Asset Audit Window, built on top of the experimental Asset Management Framework, can be used to audit disk size, memory usage, and general asset properties for many assets at once. It is a specialized version of the Content Browser, and can be accessed from the Window->Developer Tools menu, or from the right click menu in the Content Browser or Reference Viewer. Once you have opened the window, assets can be added using the buttons, and platform data from cooked asset registries can be loaded using the platform drop down.
    • Updates to VR Mode's UI and interaction include a new asymmetrical controller setup where you can change which controller has the interaction laser in Editor Preferences > VR Mode, and an updated Radial Menu with access to all major editor features and UI panels. Additional updates include smoothed interaction lasers and improved teleportation.
    • VR Mode now has access to the Sequencer Editor
    • Physics simulation is now possible in VR mode
    • Smart Snapping in VR Mode uses the bounds of your object to align to other actors in the scene, enabling you to exactly fit them together without needing to build modular assets with a grid in mind.
  • Sequencer Updates:
    • Shot enhancements include pre/post roll and hierarchical bias.
    • UI enhancements include re-sizable tracks, and audio thumbnails now render the peak samples with an inner RMS curve.
    • Material Parameter Collection Tracks have been added
    • Various minor improvements including Binding Override improvements, additional event receivers on Level Sequence Actors, event ordering, and more.
  • Audio Updates:
    • The new Unreal Audio Engine is available for testing on PC, Mac, iOS, and Android. It is not enabled by default in 4.16 as there is continued work on implementing backends for console platforms, Linux, and HTML5, as well as stability and performance improvements, especially on mobile platforms. It is enabled for each platform by changing the AudioDeviceModuleName ini setting in the platform’s Engine.ini file:
      • For PC, in WindowsEngine.ini set AudioDeviceModuleName=AudioMixerXAudio2
      • For MacOS, in MacEngine.ini set AudioDeviceModuleName=AudioMixerCoreAudio
      • For iOS, in IOSEngine.ini set AudioDeviceModuleName=AudioMixerAudioUnit
      • For Android, in AndroidEngine.ini set AudioDeviceModuleName=AudioMixerAndroid
    • The Steam Audio SDK now has a fully-integrated implementation using the capabilities of the new Unreal Audio Engine.
    • The new Synthesis Plugin contains two new real-time synthesizers written using the new Unreal Audio Engine’s “SynthComponent” class to implement a fully-featured subtractive synthesizer as well as a real-time granulator. It also contains a host of new DSP source and submix effects for use with the new Unreal Audio Engine.
  • Mobile Updates:
    • Android 23 (and above) now supports runtime permissions. The permissions required by the engine are requested at runtime, and there is an Android Permission plugin to check for a permission (Check Permission BP node), request it (AcquirePermissions BP node), and get the response (OnPermissionsGrantedDynamicDelegate). This is only necessary if the Android Target SDK is set to 23 or above; otherwise any permissions in the Android Manifest are granted as usual.
      • 4.16 requires Android SDK 23 or higher to be installed. Please use the SDK manager to install it if you have an earlier install. Also, in the Android SDK project settings, please change your Android SDK API Level from “matchndk” to “latest”. This will use the newest installed SDK found in your Android SDK platforms directory. There is no need to change the NDK API Level; “android-19” is correct to allow installing your APK on Android versions prior to Lollipop (Android 5.0); setting this higher will require Android 5.0+.
    • Improved Virtual Keyboard support on Android (Experimental). A new option in Project Settings > Platforms > Android > APKPackaging enables the improved virtual keyboard, which replaces the dialog input box. When this option is enabled, users should bind event handlers to FGenericApplication::OnVirtualKeyboardShown and FGenericApplication::OnVirtualKeyboardHidden.
    • Mobile performance improvements for Slate and UI rendering. Set the console variable Slate.CacheRenderData=0 to enable the option for Invalidation Panels to cache only widget elements, which improves texture batching and reduces draw calls.
    • HTML5 support for WebAssembly (WASM). This feature comes from Mozilla's latest Emscripten toolchain (v1.37.9) and reduces app download size, startup times, memory consumption, and improves performance.
    • HTML5 support for WebGL 2. This provides better rendering performance, visual fidelity, and more rendering feature sets.
  • VR Updates:
    • Unified Console Commands across VR platforms. The previous code base did not share common interfaces across HMDs; now there is a shared layer that developers can work from rather than maintaining each platform individually. This provides several benefits, such as easier bootstrapping of new platforms, consistent interfaces, and less redundancy in HMD implementations.
    • The mobile multiview path now supports GearVR. Mobile multiview is similar to instanced stereo on the desktop, and provides an optimized path for stereo rendering on the CPU. Enable this in your Project Settings under VR, and restart the editor to take effect.

 

The preview release is available via the Epic Games Launcher.

GameDev News

26. April 2017

 

Welcome back to our ongoing HaxeFlixel Tutorial Series.  Today we are going to cover playing sound and music in HaxeFlixel.  That title is a bit misleading, as sound and music are basically the same thing; both are ultimately FlxSound objects.  The single biggest challenge you are going to have with HaxeFlixel audio is file formats, so let’s start there.

The file format of your audio file depends entirely on the platform your code is going to run on.  On the Flash player platform, all of your audio files need to be in mp3 format.  On other platforms the mp3 format has a number of licensing issues, so they instead use Ogg Vorbis (ogg) files for longer audio and WAV files for shorter effects.  Why two different formats?  WAV files are much faster to load but take up more system memory, making them ideal for frequently used, shorter sound effects.  Ogg files on the other hand are much more compressed, taking more CPU power to decode but a great deal less space, which makes them ideal for longer audio such as your game’s music.  In addition to the file format limitations, your sounds need to be encoded at a sample rate of 11025, 22050 or 44100 Hz.  If you need to change the sample rate, Audacity is a great free program (you can learn more about it here) that can also be used to convert between the various file formats.

Using a default project, HaxeFlixel will automatically create music and sounds folders in your assets directory.  One thing you will quickly notice is that when you start including different audio files for different platforms, the build warnings become extremely irritating: you are constantly told that your mp3 file isn't compatible with the current platform, or that your WAV file won't work on Flash.  Thankfully we have the ability to filter our files based on platform.  In your Project.xml file, locate the Path Settings area and edit it like so:

   <!-- _____________________________ Path Settings ____________________________ -->

   <set name="BUILD_DIR" value="export" />
   <classpath name="source" />
   <!-- <assets path="assets" /> -->
   <assets path="assets/music" include="*.mp3" if="flash" />
   <assets path="assets/music" include="*.ogg|*.wav" unless="flash" />
   <assets path="assets/sounds" include="*.mp3" if="flash" />
   <assets path="assets/sounds" include="*.ogg|*.wav" unless="flash" />

This causes Flash builds to only see the mp3 files from the music and sounds directories, while all other platforms only see the ogg and wav files.  Ok, now let’s move on to some actual code.

Playing music and sound effects is trivial in HaxeFlixel, but once again platform becomes a factor.  In this case we are using conditional compilation to solve this problem, like so:

#if flash
   FlxG.sound.playMusic(AssetPaths.techno__mp3);   
   soundEffect = FlxG.sound.load(AssetPaths.gunshot__mp3);
#else
   FlxG.sound.playMusic(AssetPaths.techno__ogg);
   soundEffect = FlxG.sound.load(AssetPaths.gunshot__wav);  
#end

 

In this case the music file will play automatically, while the soundEffect needs to be triggered manually.  Let’s take a look at how to do that in our update() method.

if(FlxG.keys.justPressed.G)
   soundEffect.play(true);

The true parameter causes the sound effect to replace any running instances.  This will result in the sound starting from the beginning if you press the G key multiple times before a sound effect has finished playing. 
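
If you don’t need to keep a FlxSound reference around, FlxG.sound also provides a one-shot play() helper that loads and plays a sound in a single call.  A quick sketch, reusing the gunshot assets from above:

#if flash
   // Fire-and-forget playback; no FlxSound variable to manage
   FlxG.sound.play(AssetPaths.gunshot__mp3);
#else
   FlxG.sound.play(AssetPaths.gunshot__wav);
#end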

You also have a fair bit of control over sound playback.  The following code shows how to pause/resume sound as well as increasing and decreasing the volume.  Volume is a value that ranges from 0.0 to 1.0, with 0.0 being complete silence while 1.0 is maximum volume.

if(FlxG.keys.justPressed.P)
   if(FlxG.sound.music.active)
      FlxG.sound.music.pause();
   else
      FlxG.sound.music.resume();
if(FlxG.keys.justPressed.PLUS)
   FlxG.sound.changeVolume(0.1);
if(FlxG.keys.justPressed.MINUS)
   FlxG.sound.changeVolume(-0.1);
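
Note that changeVolume() adjusts the global volume for all sounds.  If you only want to adjust a single sound, each FlxSound also exposes its own volume property in the same 0.0 to 1.0 range.  A quick sketch:

// Per-sound volume, independent of the global volume changed above
soundEffect.volume = 0.25;        // quieter gunshot
FlxG.sound.music.volume = 0.75;   // slightly lowered music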

 

HaxeFlixel also supports positional audio.  You can either change the X and Y position of a sound directly, or position it relative to another object.  The latter is the approach we are going to take here, creating a virtual ‘ear’ centered on the screen.

soundX = FlxG.width/2;  
soundY = FlxG.height/2;
ear = new FlxObject();
ear.setPosition(FlxG.width/2, FlxG.height/2);   

Now we can relocate the sound relative to the ear and it will change accordingly.

 

Then, in our update() method, we move the sound around with the arrow keys and update its proximity to the ear:

if(FlxG.keys.justPressed.LEFT)
   soundX -= 25;
if(FlxG.keys.justPressed.RIGHT)
   soundX += 25;        
if(FlxG.keys.justPressed.UP)
   soundY -= 25;
if(FlxG.keys.justPressed.DOWN)
   soundY += 25;                 
FlxG.sound.music.proximity(soundX, soundY, ear, 300, true);   // 300 = radius the sound is audible within, true = pan left/right

It is also possible to tween audio, fading it in and out over time, like so:

if(FlxG.keys.justPressed.S)
   FlxG.sound.music.fadeOut(3.0);
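
Fading back in works much the same way using fadeIn(), which takes a duration plus start and end volumes.  A minimal sketch, using a hypothetical F key binding:

if(FlxG.keys.justPressed.F)
   // Ramp the music from silent back up to full volume over 3 seconds
   FlxG.sound.music.fadeIn(3.0, 0.0, 1.0);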

You can also set up callback functions when a sound stops playing.  This can be used to easily create a music or sound effect management system.

FlxG.sound.music.onComplete = function() {
   // This will only ever fire if you hit L to turn looping off
   trace("Song ended");
};
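
For example, the callback can be used to chain songs together into a simple playlist.  The following is only a sketch; trackA and trackB are hypothetical assets, and the music is started with looping disabled so that onComplete actually fires:

// Hypothetical playlist: advance to the next track whenever the current one ends
var tracks = [AssetPaths.trackA__ogg, AssetPaths.trackB__ogg];
var current = 0;
FlxG.sound.playMusic(tracks[current], 1.0, false);   // false = don't loop
FlxG.sound.music.onComplete = function() {
   current = (current + 1) % tracks.length;
   FlxG.sound.playMusic(tracks[current], 1.0, false);
};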

 

Now let’s show the complete source listing, which also illustrates a couple other features such as looping, muting and pausing audio.

package;

import flixel.FlxG;
import flixel.FlxSprite;
import flixel.FlxState;
import flixel.system.FlxSound;
import flixel.FlxObject;

class PlayState extends FlxState
{
   var soundEffect:FlxSound;
   var soundX:Float;
   var soundY:Float;
   var ear:FlxObject;

   override public function create():Void
   {
      super.create();
      #if flash
         FlxG.sound.playMusic(AssetPaths.techno__mp3);   
         soundEffect = FlxG.sound.load(AssetPaths.gunshot__mp3);
      #else
         FlxG.sound.playMusic(AssetPaths.techno__ogg);
         soundEffect = FlxG.sound.load(AssetPaths.gunshot__wav);  
      #end

      soundX = FlxG.width/2;
      soundY = FlxG.height/2;
      ear = new FlxObject();
      ear.setPosition(FlxG.width/2, FlxG.height/2);

      FlxG.sound.music.onComplete = function() {
         // This will only ever fire if you hit L to turn looping off
         trace("Song ended");
      };

      FlxG.sound.muted = true;
   }

   override public function update(elapsed:Float):Void
   {
      super.update(elapsed);

      if(FlxG.keys.justPressed.G)
         soundEffect.play(true);

      if(FlxG.keys.justPressed.P)
         if(FlxG.sound.music.active)
            FlxG.sound.music.pause();
         else
            FlxG.sound.music.resume();
      if(FlxG.keys.justPressed.PLUS)
         FlxG.sound.changeVolume(0.1);
      if(FlxG.keys.justPressed.MINUS)
         FlxG.sound.changeVolume(-0.1);

      if(FlxG.keys.justPressed.LEFT)
         soundX -= 25;
      if(FlxG.keys.justPressed.RIGHT)
         soundX += 25;        
      if(FlxG.keys.justPressed.UP)
         soundY -= 25;
      if(FlxG.keys.justPressed.DOWN)
         soundY += 25;                 
      FlxG.sound.music.proximity(soundX, soundY, ear, 300, true);

      
      if(FlxG.keys.justPressed.L)
         FlxG.sound.music.looped = false;

      if(FlxG.keys.justPressed.S)
         FlxG.sound.music.fadeOut(3.0);

      if(FlxG.keys.justPressed.M)
         FlxG.sound.muted = !FlxG.sound.muted;
   }
}

 

The Video

Programming

25. April 2017

 

After a couple of years in beta, Silicon Studio has just released the Xenko Game Engine.  If you are interested in learning more, we did a complete tutorial series back when it was known as Paradox 3D.  As part of the release, pricing information has finally been announced.

[Image: Xenko pricing tiers]

Until July 31st, the Pro version will be available for free.  Xenko is a cross platform 2D/3D game engine with an editor and full Visual Studio integration.  The primary language is C#.  The Personal version requires a splash screen and has a $200K USD revenue limit.

We did a hands-on video detailing the new release, embedded below.

GameDev News

24. April 2017

 

Cocos2D-x 3.15 was released today.  Cocos2D-x is a cross platform 2D game framework written in C++.  If you want to learn Cocos2D-x, we have a complete tutorial series available here.  This new release updates a number of the underlying dependencies and adds support for Android Studio, Google’s IDE based on IntelliJ IDEA, which recently received C++ support.

Details from the release notes:

Highlights

  • full Android Studio support, including editing, compiling and debugging C++ code (see the docs)
  • the audio engine uses Tremolo and the MP3 Decoder Library to decode audio files on Android, giving higher performance and better adaptability across different Android devices
  • WebSockets and SocketIO support SSL
  • AssetsManagerEx is more stable
  • update Spine runtime to v3.5.35
  • update flatbuffer to v1.5
  • remove support for Windows 8.1 store and phone
  • update OpenSSL to v1.1.0
  • remove linux 32-bit support

 

Cocos2d-x is available for download here.

GameDev News

23. April 2017

 

Ogre3D is a popular C++ based open source 3D renderer and scene graph.  It has been used to make several games including Torchlight 1 and 2, Dungeons, Ankh 2/3 and more.  The 1.10 release adds several new features including the ability to target the browser via Emscripten, a better build system, new documentation, better renderers and more.

Details of the release:

  • Python bindings as a component
  • vastly improved GL3+/ GLES2 renderers with GL3+ now being the recommended choice on *nix systems
  • Bites Component for rapid prototyping of applications
  • Emscripten platform target supporting WebAssembly & WebGL2
  • improved build system, automatically fetching all required dependencies.
  • A new HLMS Component implementing physically based shading
  • Unified Documentation: the API docs, the manual and some Wiki pages have been merged and are now managed with Doxygen. As a consequence, the Wiki is outdated when it comes to OGRE 1.10. If you find something particularly missing, feel free to submit an additional tutorial.

Despite the amount of new features, OGRE 1.10 provides the smoothest upgrade experience between OGRE releases so far. See the API/ABI change overview for OGRE 1.7 – 1.10 that is kindly provided by ABI-laboratory.
Note that some components are marked as [BETA]. This does not mean that they are likely to crash, but that we cannot give any API stability guarantees for them right now. You should expect their API to change without a deprecation period while we iron out the warts as the Components get more exposure.

In turn, for the core components, our deprecation list has grown considerably. You can keep using these APIs for now, as we intend to support them until OGRE 1.11. Speaking of which, to make OGRE releases predictable, we will switch from a feature-based to a time-based release model for the 1.x branch. This means that you can expect OGRE 1.11 in April 2018.

 

You can read more about the release here.

GameDev News
