5. October 2015

 

Welcome to the next section in the ongoing Closer Look At series, a series of guides aimed at helping you decide which game engine is right for you.  Each Closer Look At entry is a cross between a review and a getting started tutorial that should give you a good idea of whether a game engine is a good fit for you.  Today we are going to be looking at the Wave Engine.  The Wave Engine started life in 2011 and is a C#-based 2D/3D engine that runs on Mac, Linux and Windows and is capable of targeting all of those plus iOS, Android and Windows Mobile, although a Xamarin license may be required for some mobile targets.  There is no cost to develop using the current version of Wave Engine, however you are required to show a splash screen on application start.

 

As always there is an HD video version of this guide available here.

 

Meet Wave Engine

 

With the release of version 2, Wave Engine now includes a level editing tool to help you quickly compose scenes.  Judging by the number of “Wave Engine vs Unity” or “Wave Engine vs Unreal Engine” searches on Google, this was a heavily requested feature.  Do be aware though that the editor is quite young, and it certainly shows sometimes.

 

As mentioned earlier, Wave Engine is a C#-based game engine and games can be written entirely in Visual Studio or MonoDevelop/Xamarin Studio if you prefer.  The editor however can make life a great deal easier, and the integration between your code and the editor is well done and makes heavy use of reflection.  This makes adding editing support to your own classes extremely easy, as we will see shortly.

 

As is quickly becoming the norm, Wave Engine is built around an entity/component model.  Your scene is a series of entities, and those entities are in turn composed of components, behaviours and drawables (renderers).  These can be composed using either the editor or entirely in code.  Let’s take a look at the editor.

 

Wave Visual Editor

 

image

 

Let’s take a quick tour around the editor.

 

Asset Management

image

This is where the various assets ( textures, models, materials, scenes ) that make up your game are managed.  You can often, but not always, create an instance of an asset by dragging it on to the viewport.  Options here are fairly simple, mostly limited to renaming and deleting assets.  A bit of a warning: I ran into some issues when deleting assets, especially when I deleted an asset then added a new version with the same name.  Hopefully these behaviours will be ironed out in time.  Note that a scene, which is where you assemble your various assets for display in the game, is in fact an asset itself.

 

Depending on the type of asset, double-clicking it will either invoke a built-in tool, such as the material editor or 3D model viewer, or launch the associated native application if no built-in behaviour is defined.

 

Console

image

This is where your various levels of debug information are displayed.  You can filter between errors, warnings, informational and debug logs and of course you can clear it all out.

 

Scene Graph

image

 

The Entity hierarchy is the scene graph of your game scene.  Each scene has its own hierarchy.  All the assets in the hierarchy are entities, which in turn are holders for various components.  Entities can be parented to other entities and will inherit their transforms.  There are several kinds of entities that can be created, depending on whether you are in 2D or 3D mode.  To create an entity simply click the + icon.

The 3D entities:

image

 

The 2D Entities:

image

 

You can effectively build any of the pre-existing entity types by starting with an empty entity and adding all of the same components and behaviors.  Speaking of which, with an entity selected you can then access the Entity Details panel, where you configure existing components and add new ones.  Here for example is the 3D camera’s details:

image

 

New components or behaviours can be added using the + icon.  For visible objects such as models or sprites, a Drawable component also needs to be added.  Here for example is the process of adding a film grain effect to our camera:

image

 

Select Components, then FilmGrainLens, then OK:

image

 

You should now be able to configure the amount of film grain in the Entity Details panel:

image

 

Viewport

image

 

This is the area where you place entities within your scene.  It is also the part of the Wave Engine editor that by far needs the most work.  When an object is selected ( using the Entity Hierarchy, direct selection rarely works ) a widget appears for translation, scaling or rotation, depending on the settings in the toolbar at the top.  In the top right corner is a widget enabling you to switch between the different camera views.  If your camera is selected you can see the camera’s viewing frustum as well as a preview of what the camera sees, like the image shown above.  You can zoom using the scroll wheel, pan with the middle mouse button and rotate the view using the right mouse button.  There does not appear to be an orbit ability, which is frustrating.

 

The major current flaws are the selection mechanics ( left click works only sometimes ), manipulators that don’t work if you click on the arrow portion, the fact that there is only one view at a time, and that selecting and positioning a camera is a rage-inspiring prospect.  The single viewport with poor viewport controls makes positioning entities far more frustrating than it should be.  In fact, I tend to position entities using direct numeric entry because the viewport is so frustrating to use.  Hopefully these areas are improved soon, as all are fairly easy things to fix.

 

Toolbar

A fair bit of functionality is packed into the toolbar, as shown below:

image

 

Tools and Editors

 

As mentioned earlier, there are a few tools built directly into the Wave Visual Editor.  Some of these are invoked by selecting an asset in the asset viewer.  Others are invoked when creating new assets from the Assets menu:

 

image

 

Model Viewer

image

 

Material Editor

image

 

SpriteSheet Tool

image

Create Font

image

 

Coding in Wave Engine

 

So we’ve looked at the editor in some depth, now let’s take a look at the coding experience.  As the editor is actually a fairly recent addition, Wave Engine obviously has pretty solid code support.  Let’s take a look at the process of creating a camera and adding film grain to it like we did earlier in the editor.  The output of the editor is twofold: a series of file formats that the engine can understand and a C# project that you can edit.  You can load the solution from the File menu:

image

 

Your project should look like this:

image

 

The first project is the bootstrap for your game; it’s the second project where your code goes.

Game.cs

#region Using Statements
using System;
using WaveEngine.Common;
using WaveEngine.Common.Graphics;
using WaveEngine.Framework;
using WaveEngine.Framework.Services;
#endregion

namespace ProgrammaticCamera
{
    public class Game : WaveEngine.Framework.Game
    {
        public override void Initialize(IApplication application)
        {
            base.Initialize(application);

            ScreenContext screenContext = new ScreenContext(new MyScene());
            WaveServices.ScreenContextManager.To(screenContext);
        }
    }
}

 

MyScene.cs

#region Using Statements
using System;
using WaveEngine.Common;
using WaveEngine.Common.Graphics;
using WaveEngine.Common.Math;
using WaveEngine.Components.Cameras;
using WaveEngine.Components.Graphics2D;
using WaveEngine.Components.Graphics3D;
using WaveEngine.Framework;
using WaveEngine.Framework.Graphics;
using WaveEngine.Framework.Resources;
using WaveEngine.Framework.Services;
#endregion

namespace ProgrammaticCamera
{
    public class MyScene : Scene
    {
        protected override void CreateScene()
        {
            this.Load(WaveContent.Scenes.MyScene);           
        }
    }
}

 

WaveContent.cs

//------------------------------------------------------------------------------
// <auto-generated>
//     This code was generated by a tool.
//     Runtime Version:4.0.30319.42000
//
//     Changes to this file may cause incorrect behavior and will be lost if
//     the code is regenerated.
// </auto-generated>
//------------------------------------------------------------------------------

// File generated on 2015-10-04 12:43:05 PM
namespace ProgrammaticCamera
{
    using System;
    
    
    public sealed class WaveContent
    {
        
        public sealed class Scenes
        {
            
            /// <summary> Path to Content/Scenes/MyScene.wscene </summary>
            public const string MyScene = "Content/Scenes/MyScene.wscene";
        }
    }
}

 

Game.cs creates your scene, MyScene.cs is your scene and WaveContent.cs is a system-generated file that maps resources created in the editor to code-friendly constants.  MyScene.cs is where your logic will go.  Let’s do a simple example that creates a red spinning cube entirely procedurally.

 

#region Using Statements
using System;
using WaveEngine.Common;
using WaveEngine.Common.Graphics;
using WaveEngine.Common.Math;
using WaveEngine.Components.Cameras;
using WaveEngine.Components.Graphics2D;
using WaveEngine.Components.Graphics3D;
using WaveEngine.Framework;
using WaveEngine.Framework.Graphics;
using WaveEngine.Framework.Resources;
using WaveEngine.Framework.Services;
using WaveEngine.ImageEffects;
using WaveEngine.Materials;
#endregion

namespace ProgrammaticCamera
{
    public class MyScene : Scene
    {
        protected override void CreateScene()
        {
            this.Load(WaveContent.Scenes.MyScene);  
         
            //Create a new red material
            WaveEngine.Materials.StandardMaterial material =
                new StandardMaterial(Color.Red, DefaultLayers.Opaque);

            //Create our new entity with the name "cube"
            Entity entity = new Entity("cube");

            // AddComponent is chainable.  Add a Transform3D for position,
            // a ModelRenderer so it renders and a MaterialsMap containing our material
            entity.AddComponent(Model.CreateCube())
                .AddComponent(new Transform3D())
                .AddComponent(new ModelRenderer())
                .AddComponent(new MaterialsMap(material));

            // Accessing components is easy.  Let's position at 0,0,0,
            // even though this is actually the default
            entity.FindComponent<Transform3D>().Position = new Vector3(0f, 0f, 0f);

            // We can also add behaviors to our entity.  Let's make it spin
            entity.AddComponent(new Spinner());

            // You can also get components by type; the true/false is whether
            // the type needs to be an exact match
            var spinner = (Spinner)entity.FindComponent(typeof(Spinner), true);
            // Spin on the Y axis
            spinner.IncreaseY = 5f;

            // Finally add our entity to the active scene using the EntityManager global
            EntityManager.Add(entity);
        }
    }
}

 

And when run:

WaveEd1

 

The code is pretty heavily commented, so I won’t bother with much explanation.  Keep in mind that the code could actually be a great deal shorter; things were done the way they were for demonstration purposes only.  The entire thing could have been defined in a single chain of AddComponent calls.
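For example, a compacted version (untested, but using only the calls already shown above) could look like this:

    // The same red spinning cube, composed in one AddComponent chain
    EntityManager.Add(new Entity("cube")
        .AddComponent(new Transform3D())
        .AddComponent(Model.CreateCube())
        .AddComponent(new ModelRenderer())
        .AddComponent(new MaterialsMap(new StandardMaterial(Color.Red, DefaultLayers.Opaque)))
        .AddComponent(new Spinner() { IncreaseY = 5f }));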

 

Next let’s take a look at a 2D example that shows how you can respond to input and more tightly integrate with the editor.  We are going to add a class that can be used to control a 2D player sprite, such as the player’s character in a 2D game.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Runtime.Serialization;
using System.Text;
using WaveEngine.Common.Graphics;
using WaveEngine.Common.Input;
using WaveEngine.Components.Animation;
using WaveEngine.Components.Graphics2D;
using WaveEngine.Framework;
using WaveEngine.Framework.Graphics;
using WaveEngine.Framework.Services;

namespace SpriteTest
{
    [DataContract]
    class MyPlayer : Behavior
    {
        [RequiredComponent]
        public Transform2D transform2D;

        [RequiredComponent]
        public SpriteAtlas spriteAtlas;

        [RequiredComponent]
        public Animation2D animation2D;

        [RequiredComponent]
        public SpriteAtlasRenderer spriteAtlasRender;

        public MyPlayer()
            : base("MyPlayer")
        {
            transform2D = null;
            spriteAtlas = null;
            animation2D = null;
            spriteAtlasRender = new SpriteAtlasRenderer();
        }

        protected override void Initialize()
        {
            base.Initialize();
        }

        protected override void Update(TimeSpan gameTime)
        {
            if(WaveServices.Input.KeyboardState.Left == ButtonState.Pressed)
                transform2D.X -= (float)(100.0 * gameTime.TotalSeconds);
            if (WaveServices.Input.KeyboardState.Right == ButtonState.Pressed)
                transform2D.X += (float)(100.0 * gameTime.TotalSeconds);
        }

    }
}

 

Now in the Wave Visual Editor, when you add a component to an entity, you should see your new type as one of the available options.

image

 

As you can see, the fields we added and marked as [RequiredComponent] are exposed to the editor.  I defined a sprite sheet for our player (see the video for the complete process), and when you run the game:

waved4

Pretty cool.
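One small refinement worth trying: the movement speed is hard-coded at 100 above.  Assuming Wave’s reflection-based editor picks up serialized [DataMember] properties the same way it picks up [RequiredComponent] fields (an assumption on my part, so verify against your version), you could expose the speed as a tunable value:

        // Hedged sketch: a tunable movement speed instead of the hard-coded 100.0
        // [DataMember] exposure in the editor is an assumption; set Speed = 100f in the constructor as a default
        [DataMember]
        public float Speed { get; set; }

        protected override void Update(TimeSpan gameTime)
        {
            if (WaveServices.Input.KeyboardState.Left == ButtonState.Pressed)
                transform2D.X -= (float)(Speed * gameTime.TotalSeconds);
            if (WaveServices.Input.KeyboardState.Right == ButtonState.Pressed)
                transform2D.X += (float)(Speed * gameTime.TotalSeconds);
        }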

 

Engine Features

 

Of course, we can only go into a very small subset of the actual engine functionality in this tutorial, so let’s take a quick look at the marketing feature set:

 

Wave Engine is a C# component-based modern game engine which allows you to create cross-platform games and apps for many platforms: Windows, Linux, MacOS, iOS, Android, Windows Store and Windows Phone.

Wave Engine supports the same set of features in all its platforms to make it easy to port all your projects to other supported platforms.

You also have a great set of extensions with the complete code on github, to use WaveEngine with Kinect, OculusRift, Vuforia, Cardboard, LeapMotion and much more.

 

There is also a product roadmap available here for some direction on where Wave Engine development is headed.

 

Documentation and Community

 

At first glance Wave Engine appears to have extremely good documentation.  On closer inspection this proves to be untrue; in fact, this is certainly an area where Wave Engine needs to improve.  For example, I couldn’t find a single comprehensive link on the 3D export process ( supported formats, process involved, etc ), an area that most certainly should be well documented.

 

There is a wiki that serves as the getting started guide and tutorial in one.  Again, at first glance it looks pretty complete, but when you start opening up entries and see that 75% of them are just stubs you know there is an issue.  The generated reference material also looks very good at first glance, but once you start diving in you realize that there is almost no descriptive text in any of the entries.  This is extremely disappointing… I can live without tutorials or a manual when getting started, but a lack of reference material is a painful pill to swallow.  There is also a technical blog that covers a mishmash of subjects.

 

There is a community forum that seems fairly active.  However, in researching the product Google led me to far too many questions without answers.  Given the state of the documentation, this makes things even more frustrating.  If you aren’t the type to figure things out yourself, this will be a frustrating experience.  The design is straightforward enough that you can intuit most steps, but when you run into a wall, it hurts, a lot.

 

Fortunately there are several starter kits hosted on Github.  It is through this source code that you will mostly figure out how to work with Wave Engine.

 

Books

 

There are currently no books covering Wave Engine development.  Will update here if that changes.

 

Summary

 

Like the Paradox Game Engine, Wave Engine provides a solid, code-focused, cross-platform, C#-based alternative to Unity.  The design is clean, the code is intuitive and the examples are quite good.  The new editor is a step in the right direction and should certainly help with productivity, but it does need a layer of polish.  The documentation certainly needs more attention too, especially if this engine is going to be used by newer developers.  The kernel here shows a great deal of promise though, and the direction they are going is certainly the right one.  The API itself is quite large in scope, containing most of the functionality you will need.

 

One thing I don’t really understand, nor could I find a good source on, is the business model here.  Besides the splash screen, there are no real costs or requirements when using this engine.  There is currently no value-add or upsell version, so I don’t understand where the company is making their money.  For non-open-source projects, that is always a huge question mark in my head when evaluating an engine; companies ultimately need to make money to continue development.  So either they use this engine for their own development and release it to others for altruistic reasons, or something is going to change.

 

If you like what you saw today though, I certainly recommend you check out Wave Engine.  With no cost and a minimal download, you have very little to lose!

 

The Video



24. August 2015

 

In this tutorial in our ongoing Paradox3D Game Engine tutorial series we are going to look at controlling a Paradox game engine scene programmatically.  This includes accessing entities created in the editor, creating new entities, loading assets and more.  It should give you a better idea of the relationship between the scene and your code.

 

As always there is an HD video available here.

 

Creating a Simple Script

 

As part of this process we are going to be attaching a script to a scene entity programmatically.  First we need to create that script.  We covered this process back in this tutorial if you need a brush up.  We are going to create an extremely simple script named BackAndForth.cs, which simply moves the entity back and forth along the x-axis in our scene.  Here are the contents of the script:

using System;
using SiliconStudio.Paradox.Engine;

namespace SceneSelect
{
    public class BackAndForth : SyncScript
    {
        private float currentX = 0f;
        private const float MAX_X = 5f;
        bool goRight = false;
        public override void Update()
        {
            if (Game.IsRunning)
            {
                if (goRight)
                {
                    currentX += 0.1f; 
                }
                else
                {
                    currentX -= 0.1f;
                }

                if (Math.Abs(currentX) > MAX_X)
                    goRight = !goRight;

                Entity.Transform.Position.X = currentX;
            }
        }
    }
}

 

If you've gone through the previous tutorials, this script should require no explanation.  We simply needed an example script that we can use later on.  This one merely moves the attached entity back and forth across the X axis until it reaches + or – MAX_X.

 

Now what we want to do is attach this script to the Sphere entity created in the default scene.  This means we are going to need to be able to locate an entity in code, and perhaps more importantly, we need some code to run.  We could create our own custom Game class like we did last tutorial, but this time we are going to do things a bit differently.  Instead we are going to create a StartupScript.

 

First we need to create a new empty Entity in our scene to attach the script component to.  I called mine Config:

image


Next we create the Script we are going to attach.  Start with the following extremely simple script, Startup.cs:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using SiliconStudio.Paradox.Engine;
using SiliconStudio.Paradox.Rendering;

namespace SceneSelect
{
    public class Startup : StartupScript
    {
        public override void Start()
        {
            base.Start();
        }
    }
}

 

A StartupScript is a type of script that is run, as you may guess, on start.  Unlike the Sync/AsyncScript classes we used earlier, there is no per-frame update callback.  This makes StartupScripts very useful for exactly this type of configuration task.

 

Now that we have our script, let’s attach it to our entity:

image

 

Finding an Entity using Code

 

First we are going to look at the process of locating an entity created in Paradox Studio using code.  The following code will select the Sphere from the default scene using LINQ.

    var sphere = (from entities in this.SceneSystem.SceneInstance
                    where entities.Components.ContainsKey(ModelComponent.Key)
                    select entities).FirstOrDefault();

You can get the currently active scene using SceneSystem.SceneInstance, which contains a simple collection of Entity objects.  We then filter by entities with components of type ModelComponent.  There are many ways we could have accomplished the same thing; this query actually matches every entity in the scene that has a ModelComponent attached, which is overkill.  We could also select by the entity’s Name attribute:

image

Using the code:

    var sphere = (from entities in this.SceneSystem.SceneInstance
                  where entities.Name == "Sphere"
                  select entities).FirstOrDefault();
    if (sphere == null) return;

 

Attaching a Script Component Programmatically

 

    ScriptComponent scriptComponent = new ScriptComponent();
    scriptComponent.Scripts.Add(new BackAndForth());
    sphere.Components.Add<ScriptComponent>(ScriptComponent.Key, scriptComponent);

 

Now that we have a reference to our sphere Entity, adding a new component is pretty simple.  Remember that the ScriptComponent is a collection of Script objects.  Simply Add() an instance of our newly created BackAndForth script.  Finally attach a ScriptComponent to our Sphere’s Components collection. 

 

When we run this code we will see:

BackAndForth

 

Creating a new Entity

 

We can also create another entity programmatically.

    Entity entity = new Entity(position: new SiliconStudio.Core.Mathematics.Vector3(0, 0, 1),
                               name: "MyEntity");
    var model = (SiliconStudio.Paradox.Rendering.Model)Asset.Get(
                    typeof(SiliconStudio.Paradox.Rendering.Model), "Sphere");
    ModelComponent modelComponent = new ModelComponent(model);
    entity.Add<ModelComponent>(ModelComponent.Key, modelComponent);

 

Here we create a new entity with the name “MyEntity” and set its location to (0,0,1).  Next we get a reference to the ProceduralModel created in Paradox Studio with a call to Asset.Get(), specifying the type and URL ( you can see the URL value by mousing over the asset in the Asset Viewer panel in Studio ).  Then we create a new ModelComponent using this Model.  (Keep in mind, changes to the Model will affect all instances, as I will show momentarily.)  Finally we add the ModelComponent to the entity.

Finally we add our newly created entity to the scene using:

    SceneSystem.SceneInstance.Scene.AddChild(entity);

Now when we run the code:

BackAndForth2

 

As I mentioned earlier, changes to the Model will affect all instances.  For example, let’s say we create a new Material in the editor and apply it to the model.

image

Now the code:

    Entity entity = new Entity(position: new SiliconStudio.Core.Mathematics.Vector3(0, 0, 1),
                               name: "MyEntity");
    var model = (SiliconStudio.Paradox.Rendering.Model)Asset.Get(
                    typeof(SiliconStudio.Paradox.Rendering.Model), "Sphere");
    var material = Asset.Load<Material>("MyMaterial");
    model.Materials.Clear();
    model.Materials.Add(new MaterialInstance(material));
    ModelComponent modelComponent = new ModelComponent(model);
    entity.Add<ModelComponent>(ModelComponent.Key, modelComponent);

And the (non-animated) result:

image

As you can see, the material on all of the Spheres has been replaced.  If you do not want this behaviour, you will have to create a new Model, either in Studio or programmatically.

 

New Entity using Clone

 

We could have also created our entity using the Clone() method of our existing Entity.

    var anotherSphere = sphere.Clone();
    sphere.Transform.Position.Z = 1f;
    SceneSystem.SceneInstance.Scene.AddChild(anotherSphere);

Keep in mind, the clone will get all of the components of the cloned Entity, so if we clone after we add the ScriptComponent, the clone will also have the script attached.

 

 

Our complete source example:

using System.Linq;
using SiliconStudio.Paradox.Engine;
using SiliconStudio.Paradox.Rendering;

namespace SceneSelect
{
    public class Startup : StartupScript
    {
        public override void Start()
        {
            base.Start();

            var sphere = (from entities in this.SceneSystem.SceneInstance
                          where entities.Components.ContainsKey(ModelComponent.Key)
                          select entities).FirstOrDefault();
            //var sphere = (from entities in this.SceneSystem.SceneInstance
            //              where entities.Name == "Sphere"
            //              select entities).FirstOrDefault();
            //if (sphere == null) return;

            ScriptComponent scriptComponent = new ScriptComponent();
            scriptComponent.Scripts.Add(new BackAndForth());
            sphere.Components.Add<ScriptComponent>(ScriptComponent.Key, scriptComponent);


            Entity entity = new Entity(position: new SiliconStudio.Core.Mathematics.Vector3(0, 0, 1),
                                       name: "MyEntity");
            var model = (SiliconStudio.Paradox.Rendering.Model)Asset.Get(
                            typeof(SiliconStudio.Paradox.Rendering.Model), "Sphere");
            var material = Asset.Load<Material>("MyMaterial");
            model.Materials.Clear();
            model.Materials.Add(new MaterialInstance(material));
            ModelComponent modelComponent = new ModelComponent(model);
            entity.Add<ModelComponent>(ModelComponent.Key, modelComponent);

            SceneSystem.SceneInstance.Scene.AddChild(entity);

            var anotherSphere = sphere.Clone();
            sphere.Transform.Position.Z = 1f;
            SceneSystem.SceneInstance.Scene.AddChild(anotherSphere);
        }
    }
}

And, running:

BackAndForth3

 

The Video

 



20. August 2015

 

In this chapter we start looking at 3D game development using MonoGame.  Previously I called XNA a low-level, code-focused engine and you are about to understand why.  If you come from a higher-level game engine like Unity or even LibGDX, you are in for a bit of a shock.  Things you may take for granted in other engines/libraries, like cameras, are your responsibility in MonoGame.  Don’t worry though, it’s not all that difficult.

 

This information is also available in HD Video.

 

This chapter is going to require some prior math experience, such as an understanding of matrix mathematics.  Unfortunately teaching such concepts is far beyond the scope of what we can cover here without adding a few hundred more pages!  If you need to brush up on the underlying math, the Khan Academy is a very good place to start.  There are also a few books dedicated to teaching gamedev-related math, including 3D Math Primer for Graphics and Game Development and Mathematics for 3D Game Programming and Computer Graphics.  Don’t worry, MonoGame/XNA provide the Matrix and Vector classes for you, but it’s good to understand when to use them and why.

 

Our First 3D Application

 

This might be one of those topics that’s easier explained by seeing, so let’s jump right in with an example and follow it up with an explanation.  This example creates and displays a simple triangle about the origin, then creates a user-controlled camera that can orbit and zoom in/out on said triangle.

using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;
using Microsoft.Xna.Framework.Input;

namespace Test3D
{

    public class Test3DDemo : Game
    {
        GraphicsDeviceManager graphics;
        SpriteBatch spriteBatch;

        //Camera
        Vector3 camTarget;
        Vector3 camPosition;
        Matrix projectionMatrix;
        Matrix viewMatrix;
        Matrix worldMatrix;

        //BasicEffect for rendering
        BasicEffect basicEffect;

        //Geometric info
        VertexPositionColor[] triangleVertices;
        VertexBuffer vertexBuffer;

        //Orbit
        bool orbit = false;

        public Test3DDemo()
        {
            graphics = new GraphicsDeviceManager(this);
            Content.RootDirectory = "Content";
        }

        protected override void Initialize()
        {
            base.Initialize();

            //Setup Camera
            camTarget = new Vector3(0f, 0f, 0f);
            camPosition = new Vector3(0f, 0f, -100f);
            projectionMatrix = Matrix.CreatePerspectiveFieldOfView(
                MathHelper.ToRadians(45f),
                GraphicsDevice.DisplayMode.AspectRatio,
                1f, 1000f);
            viewMatrix = Matrix.CreateLookAt(camPosition, camTarget,
                new Vector3(0f, 1f, 0f)); // Y up
            worldMatrix = Matrix.CreateWorld(camTarget, Vector3.Forward, Vector3.Up);

            //BasicEffect
            basicEffect = new BasicEffect(GraphicsDevice);
            basicEffect.Alpha = 1f;

            // Want to see the colors of the vertices, this needs to be on
            basicEffect.VertexColorEnabled = true;

            //Lighting requires normal information which VertexPositionColor does not have
            //If you want to use lighting and VPC you need to create a custom def
            basicEffect.LightingEnabled = false;

            //Geometry - a simple triangle about the origin
            triangleVertices = new VertexPositionColor[3];
            triangleVertices[0] = new VertexPositionColor(new Vector3(0, 20, 0), Color.Red);
            triangleVertices[1] = new VertexPositionColor(new Vector3(-20, -20, 0), Color.Green);
            triangleVertices[2] = new VertexPositionColor(new Vector3(20, -20, 0), Color.Blue);

            //Vert buffer
            vertexBuffer = new VertexBuffer(GraphicsDevice, typeof(VertexPositionColor),
                3, BufferUsage.WriteOnly);
            vertexBuffer.SetData<VertexPositionColor>(triangleVertices);
        }

        protected override void LoadContent()
        {
            spriteBatch = new SpriteBatch(GraphicsDevice);
        }

        protected override void UnloadContent()
        {
        }

        protected override void Update(GameTime gameTime)
        {
            if (GamePad.GetState(PlayerIndex.One).Buttons.Back == ButtonState.Pressed ||
                Keyboard.GetState().IsKeyDown(Keys.Escape))
                Exit();

            if (Keyboard.GetState().IsKeyDown(Keys.Left))
            {
                camPosition.X -= 1f;
                camTarget.X -= 1f;
            }
            if (Keyboard.GetState().IsKeyDown(Keys.Right))
            {
                camPosition.X += 1f;
                camTarget.X += 1f;
            }
            if (Keyboard.GetState().IsKeyDown(Keys.Up))
            {
                camPosition.Y -= 1f;
                camTarget.Y -= 1f;
            }
            if (Keyboard.GetState().IsKeyDown(Keys.Down))
            {
                camPosition.Y += 1f;
                camTarget.Y += 1f;
            }
            if(Keyboard.GetState().IsKeyDown(Keys.OemPlus))
            {
                camPosition.Z += 1f;
            }
            if (Keyboard.GetState().IsKeyDown(Keys.OemMinus))
            {
                camPosition.Z -= 1f;
            }
            if (Keyboard.GetState().IsKeyDown(Keys.Space))
            {
                orbit = !orbit;
            }

            if (orbit)
            {
                Matrix rotationMatrix = Matrix.CreateRotationY(MathHelper.ToRadians(1f));
                camPosition = Vector3.Transform(camPosition, rotationMatrix);
            }
            viewMatrix = Matrix.CreateLookAt(camPosition, camTarget, Vector3.Up);
            base.Update(gameTime);
        }

        protected override void Draw(GameTime gameTime)
        {
            basicEffect.Projection = projectionMatrix;
            basicEffect.View = viewMatrix;
            basicEffect.World = worldMatrix;

            GraphicsDevice.Clear(Color.CornflowerBlue);
            GraphicsDevice.SetVertexBuffer(vertexBuffer);

            //Turn off culling so we see both sides of our rendered triangle
            RasterizerState rasterizerState = new RasterizerState();
            rasterizerState.CullMode = CullMode.None;
            GraphicsDevice.RasterizerState = rasterizerState;

            foreach (EffectPass pass in basicEffect.CurrentTechnique.Passes)
            {
                pass.Apply();
                // Draw one triangle (primitive count of 1), starting at vertex 0
                GraphicsDevice.DrawPrimitives(PrimitiveType.TriangleList, 0, 1);
            }
            }
            
            base.Draw(gameTime);
        }
    }
}

Alright… that’s a large code sample, but don’t worry, it’s not all that complicated.  At a top level, what we do here is create a triangle oriented about the origin.  We then create a camera, offset –100 units along the z-axis but looking at the origin.  We then respond to the keyboard, panning the camera in response to the arrow keys, zooming in and out in response to the plus and minus keys, and toggling orbit using the space bar.  Now let’s take a look at how we accomplish all of this.

 

First, when I said we create a camera, that is a bit of a misnomer; in fact we are creating three different matrices (singular: matrix): the View, Projection and World matrices.  These three matrices are combined to help position elements in your game world.  Let’s take a quick look at the function of each.

 

View Matrix: The View Matrix is used to transform coordinates from world space to view space.  A much easier way to envision the View matrix is that it represents the position and orientation of the camera.  It is created by passing in the camera location, where the camera is pointing, and by specifying which axis represents “Up” in the universe.  XNA uses a Y-up orientation, which is important to be aware of when creating 3D models.  Blender by default treats Z as the up/down axis, while 3D Studio MAX uses the Y-axis as “Up”.

Projection Matrix: The Projection Matrix is used to convert 3D view space to 2D.  In a nutshell, this is your actual camera lens and is created by calling CreatePerspectiveFieldOfView() or CreateOrthographic().  With Orthographic projection, the size of things remains the same regardless of their “depth” within the scene.  Perspective projection instead simulates the way an eye works, rendering things smaller as they get further away.  As a general rule, for a 2D game you use Orthographic, while in 3D you use Perspective projection.  When creating a Perspective view we specify the field of view ( think of this as the degrees of visibility from the center of your eye view ), the aspect ratio ( the proportions between width and height of the display ) and the near and far planes ( the minimum and maximum depths to render with the camera… basically the range of the camera ).  These values all go together to calculate something called the view frustum, which can be thought of as a pyramid in 3D space representing what the camera can currently see.

World Matrix: The World Matrix is used to position your entity within the scene.  Essentially this is the object’s position in the 3D world.  In addition to positional information, the World matrix can also represent an object’s orientation.

 

So, the nutshell way to think of it:

View Matrix –> Camera Location

Projection Matrix –> Camera Lens

World Matrix –> Object Position/Orientation in 3D Scene

 

By multiplying these three matrices together we get the combined World-View-Projection matrix: a magic calculation that can turn a 3D object into pixels.
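In XNA/MonoGame’s row-vector convention the combination order is world, then view, then projection.  BasicEffect builds this combined matrix for you from the three matrices we assign in Draw(), but written out it is simply:

    // Conceptual combination; BasicEffect handles this internally once World, View and Projection are set
    Matrix worldViewProjection = worldMatrix * viewMatrix * projectionMatrix;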

What value should I use for Field of View?

You may notice I used a relatively small value of 45 degrees in this example.  What, you may ask, is the ideal setting for field of view?  Well, there isn’t one, although there are some commonly accepted values.  Human beings generally have a field of view of about 180 degrees, but this includes peripheral vision.  This means if you hold your hands straight out you should be able to just see them at the edge of your vision.  Basically, if it’s in front of you, you can see it.

However video games, at least not taking into account VR headsets, don’t really use the periphery of your visual space.  Console games generally use a field of view of about 60 degrees, while PC games often set the field of view higher, in the 80-100 degree range.  The difference is generally due to the size of the screen and the distance you sit from it.  The higher the field of view, the more of the scene will be rendered on screen.
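Experimenting with this in the sample above is a one-line change to the projection matrix; the 90 degrees here is just an illustrative PC-style value:

    // Wider, PC-style field of view (illustrative value only)
    projectionMatrix = Matrix.CreatePerspectiveFieldOfView(
        MathHelper.ToRadians(90f),
        GraphicsDevice.DisplayMode.AspectRatio,
        1f, 1000f);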

 

Next up we have the BasicEffect.  Remember how earlier we used a SpriteBatch to draw sprites on screen?  Well, the BasicEffect is the 3D equivalent.  In reality it’s a wrapper over an HLSL shader responsible for rendering things to the screen.  HLSL coverage is way beyond the scope of what we can cover here, but basically it’s the set of instructions to the shader units on your graphics card telling them how to render things.  Although I can’t go into a lot of detail about how HLSL works, you are in luck, as Microsoft actually released the shader code used to create BasicEffect in the Stock Effects sample available at http://xbox.create.msdn.com/en-US/education/catalog/sample/stock_effects.  In order for BasicEffect to work it needs the World, View and Projection matrices specified; thankfully we just calculated all three of these.

 

Finally, at the end of Initialize() we create an array of VertexPositionColor, which as you can guess is a vertex with positional and color data.  We then copy the triangle data to a VertexBuffer using a call to SetData().  You may be thinking to yourself… WOW, doesn’t XNA have simple primitives like this built in?  No, it doesn’t, although there are community samples you can download, such as this one: http://xbox.create.msdn.com/en-US/education/catalog/sample/primitives_3d.

 

The logic in Update() is quite simple.  We check for input from the user and respond accordingly.  In the event of the arrow keys or the +/- keys being pressed, we change camPosition (and camTarget where appropriate).  At the end of the update we then recalculate the view matrix using our new camera position.  In response to the space bar, we toggle orbiting the camera; while orbiting, we rotate the camera by another 1 degree around the origin each frame.  Basically this shows how easy it is to update the camera by changing the viewMatrix.  Note that the projection matrix generally isn’t updated after creation, unless the resolution changes.

 

Finally we come to our Draw() call.  Here we set the view, projection and world matrices of the BasicEffect, clear the screen, and load our VertexBuffer into the GraphicsDevice by calling SetVertexBuffer().  Next we create a RasterizerState object and turn culling off.  We do this so we don’t cull back faces, which would result in our triangle being invisible when we rotate behind it.  Often you actually want to cull back faces, as there is no sense drawing geometry that isn’t visible!  Then we loop through each pass of the BasicEffect’s current technique ( look at the BasicEffect.fx HLSL file and this will make a great deal more sense, otherwise stay tuned for when we cover custom shaders later on ), and finally we draw our triangle data to the screen by calling DrawPrimitives(), in this case with a TriangleList.  There are other options such as lines and triangle strips; you are basically telling it what kind of data is in the VertexBuffer.

I’ll admit, compared to many other engines, that’s a heck of a lot of code to just draw a triangle on screen!  Reality is though, you generally write this code once and that’s it.  Or you work at a higher level, such as with 3D models imported using the content pipeline.

 

Loading and Displaying 3D Models

 

Next we take a look at the process of bringing a 3D model in from a 3D application, in this case Blender.  The process of creating such a model is well beyond the scope of this tutorial, although I have created a video showing the entire process available right here.  Or you can simply download the created COLLADA file and texture.

Which File Format works Best?


The MonoGame pipeline tool relies on an underlying library named Assimp for loading 3D models.  You may wonder which of the many supported model formats you should use when exporting from Blender.  FBX and COLLADA (dae) are the two most commonly used formats, while X and OBJ can often be used reliably with very simple non-animated meshes.  That said, exporting from Blender is always a tricky prospect, and it’s a very good idea to use a viewer like the one included in the FBX Converter package to verify your exported model looks correct.

The above video also illustrates adding the model and texture using the content pipeline.  I won’t cover the process here as it works identically to when we used the content pipeline earlier.  Let’s jump right into the code instead:

using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;
using Microsoft.Xna.Framework.Input;

namespace Test3D
{

    public class Test3DDemo2 : Game
    {
        GraphicsDeviceManager graphics;
        SpriteBatch spriteBatch;

        //Camera
        Vector3 camTarget;
        Vector3 camPosition;
        Matrix projectionMatrix;
        Matrix viewMatrix;
        Matrix worldMatrix;

        //Geometric info
        Model model;

        //Orbit
        bool orbit = false;

        public Test3DDemo2()
        {
            graphics = new GraphicsDeviceManager(this);
            Content.RootDirectory = "Content";
        }

        protected override void Initialize()
        {
            base.Initialize();

            //Setup Camera
            camTarget = new Vector3(0f, 0f, 0f);
            camPosition = new Vector3(0f, 0f, -5);
            projectionMatrix = Matrix.CreatePerspectiveFieldOfView(
                MathHelper.ToRadians(45f),
                graphics.GraphicsDevice.Viewport.AspectRatio,
                1f, 1000f);
            viewMatrix = Matrix.CreateLookAt(camPosition, camTarget,
                new Vector3(0f, 1f, 0f)); // Y up
            worldMatrix = Matrix.CreateWorld(camTarget, Vector3.Forward, Vector3.Up);

            model = Content.Load<Model>("MonoCube");
        }

        protected override void LoadContent()
        {
            spriteBatch = new SpriteBatch(GraphicsDevice);
        }

        protected override void UnloadContent()
        {
        }

        protected override void Update(GameTime gameTime)
        {
            if (GamePad.GetState(PlayerIndex.One).Buttons.Back == ButtonState.Pressed ||
                Keyboard.GetState().IsKeyDown(Keys.Escape))
                Exit();

            if (Keyboard.GetState().IsKeyDown(Keys.Left))
            {
                camPosition.X -= 0.1f;
                camTarget.X -= 0.1f;
            }
            if (Keyboard.GetState().IsKeyDown(Keys.Right))
            {
                camPosition.X += 0.1f;
                camTarget.X += 0.1f;
            }
            if (Keyboard.GetState().IsKeyDown(Keys.Up))
            {
                camPosition.Y -= 0.1f;
                camTarget.Y -= 0.1f;
            }
            if (Keyboard.GetState().IsKeyDown(Keys.Down))
            {
                camPosition.Y += 0.1f;
                camTarget.Y += 0.1f;
            }
            if (Keyboard.GetState().IsKeyDown(Keys.OemPlus))
            {
                camPosition.Z += 0.1f;
            }
            if (Keyboard.GetState().IsKeyDown(Keys.OemMinus))
            {
                camPosition.Z -= 0.1f;
            }
            if (Keyboard.GetState().IsKeyDown(Keys.Space))
            {
                orbit = !orbit;
            }

            if (orbit)
            {
                Matrix rotationMatrix = Matrix.CreateRotationY(MathHelper.ToRadians(1f));
                camPosition = Vector3.Transform(camPosition, rotationMatrix);
            }
            viewMatrix = Matrix.CreateLookAt(camPosition, camTarget, Vector3.Up);
            base.Update(gameTime);
        }

        protected override void Draw(GameTime gameTime)
        {
            GraphicsDevice.Clear(Color.CornflowerBlue);

            foreach(ModelMesh mesh in model.Meshes)
            {
                foreach(BasicEffect effect in mesh.Effects)
                {
                    //effect.EnableDefaultLighting();
                    effect.AmbientLightColor = new Vector3(1f, 0, 0);
                    effect.View = viewMatrix;
                    effect.World = worldMatrix;
                    effect.Projection = projectionMatrix;
                }
                mesh.Draw();
            }
            base.Draw(gameTime);
        }
    }
}

It operates almost identically to when we created the triangle by hand, except that the model is loaded using a call to Content.Load<Model>().  The other major difference is you no longer have to create a BasicEffect; one is automatically created for you as part of the import process and is stored in the mesh’s Effects property.  Simply loop through each effect, setting the View, Projection and World matrix values, then call Draw().  If you have a custom effect you wish to use instead of the generated effects, you can follow the process documented here: https://msdn.microsoft.com/en-us/library/bb975391(v=xnagamestudio.31).aspx.

 

The Video



14. August 2015

 

In this part of the ongoing Paradox3D Game Engine tutorial series we are going to accomplish two tasks.  First we are going to show how to set the resolution of our game in Paradox Studio.  We will then look at an example of extending Game and implementing the same thing using code.  This will be a fairly short tutorial, but needed, as the process isn’t entirely intuitive.

 

As always, there is an HD video version of this tutorial available here.

 

Setting the Screen Resolution using Paradox Studio

 

The process of setting the resolution is incredibly easy, but certainly not intuitive.  To set the resolution, in Solution Explorer within Paradox Studio, right click the game package ( FullScreen in my case ), then select Package properties.

image

 

Then in the Property grid, set the width and height desired:

image 

And done.

 

Extending Game

 

Create a new class in your .Game project; I’m calling mine MyGame.cs.  Now enter the following code:

using SiliconStudio.Paradox.Engine;

namespace FullScreen
{
    public class MyGame : Game
    {
        protected override void Initialize()
        {
            // Set the window size to 720x480
            GraphicsDeviceManager.PreferredBackBufferWidth = 720;
            GraphicsDeviceManager.PreferredBackBufferHeight = 480;

            base.Initialize();
        }
    }
}

This code simply sets the resolution by using the GraphicsDeviceManager to set PreferredBackBufferWidth and PreferredBackBufferHeight to our desired dimensions.  Initialize() is called after your application’s constructor, but before a window is displayed, making it an ideal location to set the resolution.  Why “preferred”?  Because outside of desktop platforms ( on mobile, for example ), you often don’t have control over the window size.  Like the previous tutorial, it’s very important to remember to make your class public.

 

Please note, Initialize() is just one point in the application lifecycle; there are several other protected methods you can override to gain much more precise control over the lifecycle of your game:

image

 

Now that we have created our own custom game class, we need to update the entry point for each target platform to create an instance of our new class instead of using Game.

image

 

Edit the ___App.cs file accordingly:

 

using SiliconStudio.Paradox.Engine;

namespace FullScreen
{
    class FullScreenApp
    {
        static void Main(string[] args)
        {
            using (var game = new FullScreen.MyGame())
            {
                game.Run();
            }
        }
    }
}

 

 

The Video



14. August 2015

 

In this part of the Paradox3D game engine tutorial series we are now going to look at how you actually program your games.  In the end you will discover that it’s actually a pretty straightforward process, but could certainly use some streamlining.  ( The option to generate a .cs file when you add a script component would be a nice little time saver… ).  Minor quibble however… let’s jump in.  The code in this particular example was written to target version 1.2.  If the code doesn’t work any more, be sure to check the comments for suggestions.  If there is no fix there, please email me.

 

As always, there is an HD video of this process available here or embedded below.

 

Creating a new Script

Scripting in Paradox is a two step process.  First you create the script, generally in Visual Studio.  Then you attach the script to an entity, either programmatically, or using the editor.  We are going to look at the process of creating the script first.

 

In Visual Studio, inside your .Game folder, create a new cs file.

image

 

I personally called mine ExampleScript; outside of standard C# naming requirements, the name really doesn't matter.  We now have two options as to how we want to implement our script: it can either be a SyncScript or an AsyncScript, and we will show an example of both.

A SyncScript, as the name suggests, runs synchronously.  That is, the game loop iterates over and over, and each frame our script’s Update() function is called so we can handle the logic of our script.  An AsyncScript on the other hand takes advantage of C# 5’s async functionality and allows your script to run in parallel.  This could lead to performance gains on multi-processor machines.  Which works best is ultimately up to you and your game’s design.

SyncScript example:

using System;
using SiliconStudio.Paradox.Engine;

namespace ScriptingDemo
{
    public class ExampleScriptSync : SyncScript
    {
        public override void Update()
        {

            if (Game.IsRunning)
            {
                if (Input.IsKeyDown(SiliconStudio.Paradox.Input.Keys.Left))
                {
                    this.Entity.Transform.Position.X -= 0.1f;
                }
                if (Input.IsKeyDown(SiliconStudio.Paradox.Input.Keys.Right))
                {
                    this.Entity.Transform.Position.X += 0.1f;
                }
            }
        }
    }
}

 

AsyncScript example:

using System;
using System.Threading.Tasks;
using SiliconStudio.Paradox.Engine;

namespace ScriptingDemo
{
    public class ExampleScriptAsync : AsyncScript
    {
        public override async Task Execute()
        {
            while (Game.IsRunning)
            {
                await Script.NextFrame();

                if (Input.IsKeyDown(SiliconStudio.Paradox.Input.Keys.Left))
                {
                    this.Entity.Transform.Position.X -= 0.1f;
                }
                if (Input.IsKeyDown(SiliconStudio.Paradox.Input.Keys.Right))
                {
                    this.Entity.Transform.Position.X += 0.1f;
                }
            }
        }
    }
}

 

This particular tutorial isn’t actually about how you program Paradox, so don’t pay too much attention to how the code works; that will all be explained later.  Just be aware that both the Async and Sync scripts do the same thing: transform the entity they are attached to along the X axis when the Left or Right arrow keys are pressed.  The important takeaway points are that your script derives from one of the two mentioned classes, that your script has access to the entity it is attached to, and that it actually has access to the entire game engine, allowing you to do just about anything.  Update() is not the only callback function available; there are also Start and Cancel callbacks if you need to perform startup or cleanup functionality.

 

One final extremely important note…  MAKE SURE YOUR CLASS IS PUBLIC!   Otherwise it will not be available in the editor!  Sorry, I’ll stop yelling now.

 

Implement one of the two scripts ( or both, it doesn’t matter ), then compile your project to make sure you haven't made any errors.  We are now ready to attach the script to an entity in Paradox Editor.

 

Attaching a Script using Paradox Studio

 

Now that we have a script, we can attach it to one or more entities in our scene.  In an ideal world, Paradox Studio should notice the changes you made and pop up a dialog telling you so.  Unfortunately, at least right now, it rarely succeeds with the first script you create.  In this case, simply do a quick reload using the menu: File->Reload project.

image

 

Now in the 3D view, select the entity you want to attach a script to.  If you are unfamiliar with operating Paradox Studio, please refer to this tutorial.  I am going to attach this script to the sphere model created in a default scene:

image

 

Now go to the Property Grid and press the Add Component button and select Scripts from the drop down.

image

 

Now scroll down to the Scripts component that should have been added, click the green plus sign next to Script, then in the drop-down for Item 0, select your script.

image

 

Run your game using the toolbar:

image

 

You can now control the sphere using the arrow keys:

AttachingArrowKeyScript

 

A couple of cool things to note here.  First, this shows that the same script can be used to control multiple entities.  We could attach the exact same script to our camera, the light, another model, etc… and it would just work.  Second, you can attach multiple scripts to the same entity.
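The same multi-script setup can also be done in code.  Here is a rough sketch reusing the ScriptComponent API from the programmatic scene tutorial above; AnotherScript is just a hypothetical second script class:

    // Attach two scripts to one entity programmatically instead of through Studio
    ScriptComponent scripts = new ScriptComponent();
    scripts.Scripts.Add(new ExampleScriptSync());
    scripts.Scripts.Add(new AnotherScript());   // hypothetical second script class
    entity.Components.Add<ScriptComponent>(ScriptComponent.Key, scripts);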

 

What we didn’t cover

We covered the basics of attaching a script to an Entity in Paradox, and it should give you a good idea of how you add logic to entities in your game.  There are two things we didn’t cover (yet) that I think are important to be aware of before we move on.

 

First, in addition to the SyncScript and AsyncScript classes, there is a third scripting type: StartupScript.  This is a type of script that is called once when the object it is attached to is created; the major difference is that it is not called each frame or asynchronously like the other two script types.
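Here is a minimal skeleton, mirroring the Startup script shown in the programmatic scene tutorial above:

using SiliconStudio.Paradox.Engine;

namespace ScriptingDemo
{
    // Start() runs once when the entity this script is attached to is created; there is no per-frame Update()
    public class ExampleStartupScript : StartupScript
    {
        public override void Start()
        {
            base.Start();
            // One-time setup or configuration logic goes here
        }
    }
}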

 

Second is the game class.  If you look in your generated project, in each platform you will see an entry point, like this one for the Windows platform:

image

 

Here are the contents of that script:

using SiliconStudio.Paradox.Engine;

namespace ScriptingDemo
{
    class ScriptingDemoApp
    {
        static void Main(string[] args)
        {
            using (var game = new Game())
            {
                game.Run();
            }
        }
    }
}

 

As you can see, the heart of this script is to create an instance of Game, then call Run().  If you require more control over the lifecycle of your game, you can easily derive your own game from the Game class and create an instance of it instead.  We will see a simple example of this process in the next tutorial.

 

Don’t worry if you are a bit lost on the specifics of the code; I had no intention of explaining how the code actually works, as those posts will be coming in the near future.  You should however now have a good idea of how you create a script and attach it to your game entities.

 

The Video


