24. August 2015

 

In this tutorial in our ongoing Paradox3d Game Engine tutorial series  we are going to look at controlling a Paradox game engine scene programmatically.  This includes accessing entities created in the editor, creating new entities, loading assets and more.  It should give you a better idea of the relationship between the scene and your code.

 

As always there is an HD video available here.

 

Creating a Simple Script

 

As part of this process we are going to be attaching a script to a scene entity programmatically.  First we need that script to be created.  We covered this process back in this tutorial if you need a brush-up.  We are going to create an extremely simple script named BackAndForth.cs, which simply moves the entity back and forth along the x-axis in our scene.  Here are the contents of the script:

using System;
using SiliconStudio.Paradox.Engine;

namespace SceneSelect
{
    public class BackAndForth : SyncScript
    {
        private float currentX = 0f;
        private const float MAX_X = 5f;
        bool goRight = false;
        public override void Update()
        {
            if (Game.IsRunning)
            {
                if (goRight)
                {
                    currentX += 0.1f; 
                }
                else
                {
                    currentX -= 0.1f;
                }

                if (Math.Abs(currentX) > MAX_X)
                    goRight = !goRight;

                Entity.Transform.Position.X = currentX;
            }
        }
    }
}

 

If you've gone through the previous tutorials, this script should require no explanation.  We simply needed an example script that we can use later on.  This one merely moves the attached entity back and forth across the X axis until it reaches + or – MAX_X.
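If you want to poke at the movement logic outside the engine, here is the same oscillation isolated as plain C# (no Paradox types involved; this is just the math from the script above, runnable on its own):

```csharp
using System;

// The back-and-forth logic from BackAndForth.cs, isolated from the engine.
// Each step moves 0.1 units; the direction flips once |x| exceeds MAX_X,
// so the position stays within a band slightly wider than ±MAX_X.
const float MAX_X = 5f;
float currentX = 0f;
bool goRight = false;

void Step()
{
    currentX += goRight ? 0.1f : -0.1f;
    if (Math.Abs(currentX) > MAX_X)
        goRight = !goRight;
}

// Simulate a few hundred frames; the "entity" never escapes the band.
for (int i = 0; i < 500; i++)
    Step();
Console.WriteLine(currentX);
```

Note the band is actually ±(MAX_X + 0.1): the flip happens on the frame after the boundary is crossed, which is harmless here but worth knowing if you tighten the step size.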

 

Now what we want to do is attach this script to the Sphere entity created in the default scene.  This means we are going to need to be able to locate an entity in code, and perhaps more importantly, we need some code to run.  We could create our own custom Game class like we did last tutorial, but this time we are going to do things a bit differently.  Instead we are going to create a StartupScript.

 

First we need to create a new empty Entity in our scene to attach the script component to.  I called mine Config:

image


Next we create the Script we are going to attach.  Start with the following extremely simple script, Startup.cs:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using SiliconStudio.Paradox.Engine;
using SiliconStudio.Paradox.Rendering;

namespace SceneSelect
{
    public class Startup : StartupScript
    {
        public override void Start()
        {
            base.Start();
        }
    }
}

 

A StartupScript is a type of script that is loaded, as you may guess, on start.  Unlike the Sync/AsyncScript classes we used earlier, there is no per-frame update callback.  This makes StartupScripts very useful for exactly this kind of configuration task.

 

Now that we have our script, let’s attach it to our entity:

image

 

Finding an Entity using Code

 

First we are going to look at the process of locating an entity created in Paradox Studio using code.  The following code will select the Sphere from the default scene using LINQ.

    var sphere = (from entities in this.SceneSystem.SceneInstance
                    where entities.Components.ContainsKey(ModelComponent.Key)
                    select entities).FirstOrDefault();

You can get the currently active scene using SceneSystem.SceneInstance, which contains a simple collection of Entity objects.  We then filter to entities with Components of type ModelComponent.  There are many ways we could have accomplished the same thing.  This query actually returns all entities in the scene that have a ModelComponent attached, which is overkill.  We could also select by the entity's Name attribute:

image

Using the code:

    var sphere = (from entities in this.SceneSystem.SceneInstance
                  where entities.Name == "Sphere"
                  select entities).FirstOrDefault();
    if (sphere == null) return;
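FirstOrDefault() behaves here exactly as it does in ordinary LINQ: it returns the first match, or null when nothing matches, which is why the null check matters.  A quick illustration over a plain list of hypothetical entity names (no Paradox types needed):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// The same query shape as the scene lookup above, run against an
// illustrative list standing in for the scene's entity collection.
var names = new List<string> { "Camera", "Directional light", "Sphere" };

var sphere = (from n in names where n == "Sphere" select n).FirstOrDefault();
var missing = (from n in names where n == "Cube" select n).FirstOrDefault();

Console.WriteLine(sphere ?? "null");   // "Sphere"
Console.WriteLine(missing ?? "null");  // "null" - this is why we null check
```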

 

Attaching a Script Component Programmatically

 

    ScriptComponent scriptComponent = new ScriptComponent();
    scriptComponent.Scripts.Add(new BackAndForth());
    sphere.Components.Add<ScriptComponent>(ScriptComponent.Key, scriptComponent);

 

Now that we have a reference to our sphere Entity, adding a new component is pretty simple.  Remember that the ScriptComponent is a collection of Script objects.  Simply Add() an instance of our newly created BackAndForth script, then attach the ScriptComponent to our Sphere's Components collection.

 

When we run this code we will see:

BackAndForth

 

Creating a new Entity

 

We can also create another entity programmatically.

    Entity entity = new Entity(position: new SiliconStudio.Core.Mathematics.Vector3(0, 0, 1),
                               name: "MyEntity");
    var model = (SiliconStudio.Paradox.Rendering.Model)Asset.Get(
                typeof(SiliconStudio.Paradox.Rendering.Model), "Sphere");
    ModelComponent modelComponent = new ModelComponent(model);
    entity.Add<ModelComponent>(ModelComponent.Key, modelComponent);

 

Here we create a new entity with the name "MyEntity" and set its location to (0,0,1).  Next we get a reference to the ProceduralModel created in Paradox Studio with a call to Asset.Get(), specifying the type and URL (you can see the URL value by mousing over the asset in the Asset Viewer panel in Studio).  Then we create a new ModelComponent using this Model.  (Keep in mind, changes to the Model will affect all instances, as I will show momentarily.)  Finally we add the ModelComponent to the entity.

Finally we add our newly created entity to the scene using:

    SceneSystem.SceneInstance.Scene.AddChild(entity);

Now when we run the code:

BackAndForth2

 

As I mentioned earlier, changes to the Model will affect all instances.  For example, let’s say we create a new Material in the editor and apply it to the model.

image

Now the code:

    Entity entity = new Entity(position: new SiliconStudio.Core.Mathematics.Vector3(0, 0, 1),
                               name: "MyEntity");
    var model = (SiliconStudio.Paradox.Rendering.Model)Asset.Get(
                typeof(SiliconStudio.Paradox.Rendering.Model), "Sphere");
    var material = Asset.Load<Material>("MyMaterial");
    model.Materials.Clear();
    model.Materials.Add(new MaterialInstance(material));
    ModelComponent modelComponent = new ModelComponent(model);
    entity.Add<ModelComponent>(ModelComponent.Key, modelComponent);

And the (non-animated) result:

image

As you can see, the material on all of the Spheres has been replaced.  If you do not want this behaviour, you will have to create a new Model, either in Studio or programmatically.

 

New Entity using Clone

 

We could have also created our entity using the Clone() method of our existing Entity.

    var anotherSphere = sphere.Clone();
    sphere.Transform.Position.Z = 1f;
    SceneSystem.SceneInstance.Scene.AddChild(anotherSphere);

Keep in mind, the clone will get all of the components of the cloned Entity, so if we clone after we add the ScriptComponent, it will also have the script attached.

 

 

Our complete source example:

using System.Linq;
using SiliconStudio.Paradox.Engine;
using SiliconStudio.Paradox.Rendering;

namespace SceneSelect
{
    public class Startup : StartupScript
    {
        public override void Start()
        {
            base.Start();

            var sphere = (from entities in this.SceneSystem.SceneInstance
                          where entities.Components.ContainsKey(ModelComponent.Key)
                          select entities).FirstOrDefault();
            //var sphere = (from entities in this.SceneSystem.SceneInstance
            //              where entities.Name == "Sphere"
            //              select entities).FirstOrDefault();
            //if (sphere == null) return;

            ScriptComponent scriptComponent = new ScriptComponent();
            scriptComponent.Scripts.Add(new BackAndForth());
            sphere.Components.Add<ScriptComponent>(ScriptComponent.Key, scriptComponent);


            Entity entity = new Entity(position: new SiliconStudio.Core.Mathematics.Vector3(0, 0, 1),
                                       name: "MyEntity");
            var model = (SiliconStudio.Paradox.Rendering.Model)Asset.Get(
                        typeof(SiliconStudio.Paradox.Rendering.Model), "Sphere");
            var material = Asset.Load<Material>("MyMaterial");
            model.Materials.Clear();
            model.Materials.Add(new MaterialInstance(material));
            ModelComponent modelComponent = new ModelComponent(model);
            entity.Add<ModelComponent>(ModelComponent.Key, modelComponent);

            SceneSystem.SceneInstance.Scene.AddChild(entity);

            var anotherSphere = sphere.Clone();
            sphere.Transform.Position.Z = 1f;
            SceneSystem.SceneInstance.Scene.AddChild(anotherSphere);
        }
    }
}

And, running:

BackAndForth3

 

The Video

 

Programming


21. August 2015

Today marks the official release of jMonkeyEngine 3.1 alpha. Generally I wouldn't make a news post over a minor alpha release but a) jME has been pretty quiet lately, b) I'm currently looking at this engine right now and c) it's a pretty massive release.

In addition to underlying changes like a move to GitHub, a transition from Ant to Gradle build systems and the implementation of a commenting system that isn't from the 90s, there are some pretty huge new features, such as iOS support, FBX importing, VR support, render optimizations and much more.

 

The full release notes follow:

 

At long last, we have our first alpha release for the jMonkeyEngine 3.1 SDK.

Go get it on GitHub and start breaking things.

Not only does this release mark the introduction of some absolutely game-changing features (or shall we say, abbreviations: iOS, FBX, VR!); it also marks a significant step forward in jME’s underlying infrastructure. In the following weeks, we will explain each and every one of these changes in depth.

All the same bits, structured differently

  • First, we switched from using Google Code (SVN) to GitHub (Git) for
    our source code repository.
  • And then, as if that wasn’t enough, we went from using ANT
    for our build system to using Gradle.
  • We also migrated our forum to the ever more awesome Discourse, which was followed by a series of website updates, with more to come.

These structural changes will allow us to do our work more effectively, and with the combined power of GitHub and Discourse, we’re already seeing a big uptake in contributions and overall user participation.

Unified Renderer Architecture

Previously, there would be a Renderer implementation for each platform that jME3 supported, but all of these platforms supported OpenGL, so in the end, this led to a lot of code duplication. Each time we wanted to add a new renderer feature, all existing renderer implementations had to be modified in the same way.

The new unified renderer architecture means there's only one Renderer implementation, "GLRenderer", which then calls into GL interfaces implemented by each back-end; this is much easier to maintain. It means easier modification of renderer internals, including performance improvements, as well as the ability to add really advanced features to the renderer that weren't possible before. As a consequence, the OpenGL 1 renderer is now out; nobody will ever miss it and (probably) nobody used it for anything. There were some other changes around the rendering pipeline to reduce useless work and improve performance.

OpenGL 3 Core Profile Support

This is a significant improvement especially on Mac OS X and Linux where using the Core Profile actually allows more features to be used than otherwise. Do note that many jME3 shaders don’t support GLSL 1.5 which is required on some platforms when using OpenGL Core Profile – this is being worked on …

Geometry / Tessellation Shader Support

Added support for specifying geometry and tessellation shaders in the material definition. Note that this requires hardware capable of running such shaders. This feature is not used in the engine itself for any capability.

Scene Graph Optimizations

Previously, the engine would need to recurse into the scene graph 3 times every frame, even if nothing was changed! This has been improved so only the branches of the scene graph that require updating or rendering are actually walked into. This equals big performance boost for mostly static and large scenes. The only kind of scenes that don’t benefit from this are scenes where all objects and lights are constantly moving and the entire scene is visible in the camera the whole time. Those kinds of scenes are very rare!

In addition, hardware skinning is now enabled by default, which means a big speed boost when there are many animated models on screen.

Lighting Boost

Remy “nehon” already made a post about this which you can read here. With both single pass lighting and light culling you can now expect big performance improvements in large scenes with many lights. – When rendering shadows for lights, only casters that are inside the light’s area of influence are rendered.

FBX Importer (Beta!)

There’s a beta quality FBX importer currently in development. Unfortunately skeletal animation is not supported yet, but once it is finished, it should replace the semi-functional OgreXML support and hopefully be on par with the .blend importer.

Geometry Instancing

If you want to render a certain (complex) model many times in different places, e.g. a forest or asteroid field, you can use InstancedNode (requires OpenGL 3 or higher support!)

Rewritten Audio Streaming

If you were using audio streams before, you might have noticed that they have quite a lot of limitations. They cannot be looped, reset, or stopped without the audio stream becoming useless. The new changes mean you can now stop, loop, or reset audio streams with ease. Also, updates about audio finishing playing now occur every frame instead of every 50 milliseconds (e.g. if you were relying on it for any events & such)

Further, there’s a new capability to determine current playback position of an audio source. Can be used to synchronize events or video to an audio stream.

Networking Improvements

HostedServices: Essentially like AppStates, but independent of jME3 Application infrastructure.

Gamma-correct lighting and high dynamic range rendering

Gamma-correct lighting basically means lighting looks better, or more realistic, or both. Oh, and if you're planning on using this, you had better make sure it's always on, because your scene will look different depending on whether it's on or off. While at it, you can also use the new tonemap filter for HDR rendering. The tonemap algorithm is based on a filmic curve from Uncharted 2.

Profiling Frame Times

With the app profiler state, you can see how long each part of a frame takes, e.g. rendering or updates, thus allowing you to detect stuttering parts in the game and optimize them.

iOS Improvements

  • Now iOS support is mostly stable (but still behind Android support). More testing is needed.
  • Texture loading issue fixed.
  • Audio support now enabled.

Android Bugfixes & Improvements

  • Texture decoding is now handled by C++ code so loading time is now much shorter. This also means the terrain alphamap issue is fixed. Previously you had to flip the alpha channel to use terrain on Android, this is no longer required.
  • OGG/Vorbis audio decoding is now handled by C++ code. This allows using the native OpenAL Soft audio library to handle audio instead of the Android built-in MediaPlayer, hence 3D audio, doppler effects, and reverb is now supported.
  • Support for Android Fragments (on Android 4.0+)
  • Added support for joysticks. For example, you can connect your Xbox 360 controller to your Android tablet and it will show up as an actual joystick in jME.

Blender Importer

  • Improved support for models animated with IK (inverse kinematics)
  • Support for loading linked .blend files

SDK Editor Improvements

dark_monkey

  • Enhanced shader node editor with many issues fixed.
  • 3D Scale / Rotate Tool.
  • New “DarkMonkey” theme which matches the forum theme (you have to enable it manually under the Look and Feel settings)

Bullet Physics

  • Added capability to change number of solver iterations – aka “physics accuracy”.
  • Added support for native sweep test (previously unimplemented)
  • Fixes to native ray test (previously was broken / crashing)
  • Allow 3D vector linear and angular factor instead of just a scalar factor

Misc Engine

  • Print out current build branch / tag / revision / hash in log

Misc Bugfixes

  • Fix inconsistent mouse coordinate origin on AWT panels
  • Fix translucent bucket on AWT panels
  • Fix using texture arrays with GPU compressed textures
  • Fix building engine on JDK8 and latest Android NDK
  • Fix point sprites on Android
  • Fix post-processing / FBO on Android
  • Fix running jME3 in the Android emulator
  • Fix shadow effect Z fade feature
  • Fix compilation issues on Java 1.8
  • Fix broken Material.preload() method
  • Fix water filter not working on GPUs without OpenGL 3 support
  • Fix crashing filter multisample support on OpenGL 3.2 contexts
  • Fix bounding volume not updated when geometry inside BatchNode is modified
  • Fix incorrect flipping of 2×2 DXT5 images
  • Fix audio source reverb being enabled by default
  • Fix batching with vertex colored meshes
  • And a trillion other bug fixes I forgot to mention, so you better start using jME 3.1 today!

 

News


20. August 2015

 

In this chapter we start looking at 3D game development using MonoGame.  Previously I called XNA a low-level, code-focused engine and you are about to understand why.  If you come from a higher-level game engine like Unity or even LibGDX you may be in for a shock.  Things you may take for granted in other engines/libraries, like cameras, are your responsibility in MonoGame.  Don't worry though, it's not all that difficult.

 

This information is also available in HD Video.

 

This chapter is going to require some prior math experience, such as an understanding of matrix mathematics.  Unfortunately, teaching such concepts is far beyond the scope of what we can cover here without adding a few hundred more pages!  If you need to brush up on the underlying math, the Khan Academy is a very good place to start.  There are also a few books dedicated to teaching gamedev-related math, including 3D Math Primer for Graphics and Game Development and Mathematics for 3D Game Programming and Computer Graphics.  Don't worry, MonoGame/XNA provide the Matrix and Vector classes for you, but it's good to understand when to use them and why.

 

Our First 3D Application

 

This might be one of those topics that’s easier explained by seeing.  So let’s jump right in with an example and follow it up with explanation.  This example creates then displays a simple triangle about the origin, then creates a user controlled camera that can orbit and zoom in/out on said triangle.

using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;
using Microsoft.Xna.Framework.Input;

namespace Test3D
{

    public class Test3DDemo : Game
    {
        GraphicsDeviceManager graphics;
        SpriteBatch spriteBatch;

        //Camera
        Vector3 camTarget;
        Vector3 camPosition;
        Matrix projectionMatrix;
        Matrix viewMatrix;
        Matrix worldMatrix;

        //BasicEffect for rendering
        BasicEffect basicEffect;

        //Geometric info
        VertexPositionColor[] triangleVertices;
        VertexBuffer vertexBuffer;

        //Orbit
        bool orbit = false;

        public Test3DDemo()
        {
            graphics = new GraphicsDeviceManager(this);
            Content.RootDirectory = "Content";
        }

        protected override void Initialize()
        {
            base.Initialize();

            //Setup Camera
            camTarget = new Vector3(0f, 0f, 0f);
            camPosition = new Vector3(0f, 0f, -100f);
            projectionMatrix = Matrix.CreatePerspectiveFieldOfView(
                               MathHelper.ToRadians(45f),
                               GraphicsDevice.DisplayMode.AspectRatio,
                               1f, 1000f);
            viewMatrix = Matrix.CreateLookAt(camPosition, camTarget,
                         new Vector3(0f, 1f, 0f)); // Y up
            worldMatrix = Matrix.CreateWorld(camTarget, Vector3.Forward,
                          Vector3.Up);

            //BasicEffect
            basicEffect = new BasicEffect(GraphicsDevice);
            basicEffect.Alpha = 1f;

            //Want to see the colors of the vertices, this needs to be on
            basicEffect.VertexColorEnabled = true;

            //Lighting requires normal information which VertexPositionColor does not have
            //If you want to use lighting and VPC you need to create a custom vertex definition
            basicEffect.LightingEnabled = false;

            //Geometry - a simple triangle about the origin
            triangleVertices = new VertexPositionColor[3];
            triangleVertices[0] = new VertexPositionColor(new Vector3(0, 20, 0), Color.Red);
            triangleVertices[1] = new VertexPositionColor(new Vector3(-20, -20, 0), Color.Green);
            triangleVertices[2] = new VertexPositionColor(new Vector3(20, -20, 0), Color.Blue);

            //Vert buffer
            vertexBuffer = new VertexBuffer(GraphicsDevice, typeof(VertexPositionColor),
                           3, BufferUsage.WriteOnly);
            vertexBuffer.SetData<VertexPositionColor>(triangleVertices);
        }

        protected override void LoadContent()
        {
            spriteBatch = new SpriteBatch(GraphicsDevice);
        }

        protected override void UnloadContent()
        {
        }

        protected override void Update(GameTime gameTime)
        {
            if (GamePad.GetState(PlayerIndex.One).Buttons.Back == ButtonState.Pressed ||
                Keyboard.GetState().IsKeyDown(Keys.Escape))
                Exit();

            if (Keyboard.GetState().IsKeyDown(Keys.Left))
            {
                camPosition.X -= 1f;
                camTarget.X -= 1f;
            }
            if (Keyboard.GetState().IsKeyDown(Keys.Right))
            {
                camPosition.X += 1f;
                camTarget.X += 1f;
            }
            if (Keyboard.GetState().IsKeyDown(Keys.Up))
            {
                camPosition.Y -= 1f;
                camTarget.Y -= 1f;
            }
            if (Keyboard.GetState().IsKeyDown(Keys.Down))
            {
                camPosition.Y += 1f;
                camTarget.Y += 1f;
            }
            if(Keyboard.GetState().IsKeyDown(Keys.OemPlus))
            {
                camPosition.Z += 1f;
            }
            if (Keyboard.GetState().IsKeyDown(Keys.OemMinus))
            {
                camPosition.Z -= 1f;
            }
            if (Keyboard.GetState().IsKeyDown(Keys.Space))
            {
                orbit = !orbit;
            }

            if (orbit)
            {
                Matrix rotationMatrix = Matrix.CreateRotationY(MathHelper.ToRadians(1f));
                camPosition = Vector3.Transform(camPosition, rotationMatrix);
            }
            viewMatrix = Matrix.CreateLookAt(camPosition, camTarget, Vector3.Up);
            base.Update(gameTime);
        }

        protected override void Draw(GameTime gameTime)
        {
            basicEffect.Projection = projectionMatrix;
            basicEffect.View = viewMatrix;
            basicEffect.World = worldMatrix;

            GraphicsDevice.Clear(Color.CornflowerBlue);
            GraphicsDevice.SetVertexBuffer(vertexBuffer);

            //Turn off culling so we see both sides of our rendered triangle
            RasterizerState rasterizerState = new RasterizerState();
            rasterizerState.CullMode = CullMode.None;
            GraphicsDevice.RasterizerState = rasterizerState;

            foreach (EffectPass pass in basicEffect.CurrentTechnique.Passes)
            {
                pass.Apply();
                GraphicsDevice.DrawPrimitives(PrimitiveType.TriangleList, 0, 3);
            }
            }
            
            base.Draw(gameTime);
        }
    }
}

Alright… that's a large code sample, but don't worry, it's not all that complicated.  At a top level, what we do here is create a triangle oriented about the origin.  We then create a camera, offset –100 units along the z-axis but looking at the origin.  We then respond to keyboard input, panning the camera in response to the arrow keys, zooming in and out in response to the plus and minus keys, and toggling orbit using the space bar.  Now let's take a look at how we accomplish all of this.

 

First, when I said we create a camera, that was a bit of a misnomer; in fact we are creating three different matrices (singular: matrix), the View, Projection and World matrix.  These three matrices are combined to help position elements in your game world.  Let's take a quick look at the function of each.

 

View Matrix  The View Matrix is used to transform coordinates from World to View space.  A much easier way to envision the View matrix is it represents the position and orientation of the camera.  It is created by passing in the camera location, where the camera is pointing and by specifying which axis represents “Up” in the universe.  XNA uses a Y-up orientation, which is important to be aware of when creating 3D models.  Blender by default treats Z as the up/down axis, while 3D Studio MAX uses the Y-axis as “Up”.

Projection Matrix The Projection Matrix is used to convert 3D view space to 2D.  In a nutshell, this is your actual camera lens and is created by calling CreatePerspectiveFieldOfView() or CreateOrthographic().  With orthographic projection, the size of things remains the same regardless of their "depth" within the scene.  Perspective projection instead simulates the way an eye works, rendering things smaller as they get further away.  As a general rule, for a 2D game you use orthographic, while in 3D you use perspective projection.  When creating a perspective view we specify the field of view ( think of this as the degrees of visibility from the center of your eye view ), the aspect ratio ( the proportions between width and height of the display ) and the near and far planes ( the minimum and maximum depth to render with the camera… basically the range of the camera ).  These values all go together to calculate something called the view frustum, which can be thought of as a pyramid in 3D space representing what is currently visible.
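The perspective/orthographic difference is easy to see numerically.  This sketch uses .NET's System.Numerics matrices as a stand-in for the XNA types (the APIs are very similar, though note that CreateOrthographic takes a view width and height rather than a field of view):

```csharp
using System;
using System.Numerics;

// Under perspective projection the same vertical offset appears smaller
// the deeper it sits; under orthographic projection it does not.
Matrix4x4 persp = Matrix4x4.CreatePerspectiveFieldOfView(
    (float)(Math.PI / 4.0),  // 45 degree vertical field of view
    1f,                      // square aspect ratio for simplicity
    1f, 1000f);              // near and far planes
Matrix4x4 ortho = Matrix4x4.CreateOrthographic(100f, 100f, 1f, 1000f);

float ProjectedY(Matrix4x4 projection, float depth)
{
    // A point 10 units above the camera axis, 'depth' units in front of a
    // right-handed camera (which looks down -Z), after the w divide.
    Vector4 clip = Vector4.Transform(new Vector4(0f, 10f, -depth, 1f), projection);
    return clip.Y / clip.W;
}

Console.WriteLine(ProjectedY(persp, 30f));   // nearer: larger projected size
Console.WriteLine(ProjectedY(persp, 200f));  // farther: smaller
Console.WriteLine(ProjectedY(ortho, 30f));   // same size at any depth
Console.WriteLine(ProjectedY(ortho, 200f));
```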

World Matrix The World matrix is used to position your entity within the scene.  Essentially this is your position in the 3D world.  In addition to positional information, the World matrix can also represent an object's orientation.

 

So, the nutshell way to think of it:

View Matrix –> Camera Location

Projection Matrix –> Camera Lens

World Matrix –> Object Position/Orientation in 3D Scene

 

By multiplying these three matrices together we get the WorldViewProjection matrix, a magic calculation that can turn a 3D object into pixels.
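To see that combined transform in action, here is a sketch using System.Numerics as a stand-in for the XNA matrix types (same row-vector convention, so the multiply order is World * View * Projection), pushing the top vertex of our triangle all the way to normalized device coordinates:

```csharp
using System;
using System.Numerics;

// The same camera values as the sample: at (0,0,-100), looking at the origin.
Vector3 camPosition = new Vector3(0f, 0f, -100f);
Vector3 camTarget = Vector3.Zero;

Matrix4x4 world = Matrix4x4.Identity;  // triangle sits at the origin
Matrix4x4 view = Matrix4x4.CreateLookAt(camPosition, camTarget, Vector3.UnitY);
Matrix4x4 projection = Matrix4x4.CreatePerspectiveFieldOfView(
    (float)(Math.PI / 4.0),  // 45 degrees
    16f / 9f,                // aspect ratio
    1f, 1000f);              // near and far planes

// The combined "turn 3D into pixels" matrix.
Matrix4x4 wvp = world * view * projection;

// Transform the triangle's top vertex (0, 20, 0) into clip space, then
// divide by w to get normalized device coordinates in [-1, 1].
Vector4 clip = Vector4.Transform(new Vector4(0f, 20f, 0f, 1f), wvp);
Vector3 ndc = new Vector3(clip.X, clip.Y, clip.Z) / clip.W;
Console.WriteLine(ndc);  // x is centered, y is in the upper half of the screen
```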

What value should I use for Field of View?

You may notice I used a relatively small value of 45 degrees in this example.  What, you may ask, is the ideal setting for field of view?  Well, there isn't one, although there are some commonly accepted values.  Human beings generally have a field of view of about 180 degrees, but this includes peripheral vision.  This means if you hold your hands straight out you should be able to just see them out of the edge of your vision.  Basically, if it's in front of you, you can see it.

However video games, at least not taking VR headset games into account, don't really use the periphery of your visual field.  Console games generally set a field of view of about 60 degrees, while PC games often set the field of view higher, in the 80-100 degree range.  The difference is generally due to the size of the screen and the viewer's distance from it.  The higher the field of view, the more of the scene will be rendered on screen.
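One subtlety worth knowing when comparing these numbers: XNA's CreatePerspectiveFieldOfView() takes the vertical field of view, and the horizontal field of view grows with the aspect ratio.  The small helper below (illustrative only, not part of XNA) shows the relationship:

```csharp
using System;

// Convert a vertical field of view plus an aspect ratio into the
// corresponding horizontal field of view, using the half-angle tangent.
double HorizontalFov(double verticalFovDegrees, double aspectRatio)
{
    double vRad = verticalFovDegrees * Math.PI / 180.0;
    double hRad = 2.0 * Math.Atan(Math.Tan(vRad / 2.0) * aspectRatio);
    return hRad * 180.0 / Math.PI;
}

// Our 45 degree vertical FOV on a 16:9 display is roughly 73 degrees wide,
// already in the neighborhood of typical PC settings.
Console.WriteLine(HorizontalFov(45.0, 16.0 / 9.0));
```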

 

Next up we have the BasicEffect.  Remember how earlier we used a SpriteBatch to draw sprites on screen?  Well, the BasicEffect is the 3D equivalent.  In reality it's a wrapper over an HLSL shader responsible for rendering things to the screen.  HLSL coverage is way beyond the scope of what we can cover here, but basically it's the instructions to the shader units on your graphics card telling them how to render things.  Although I can't go into a lot of detail about how HLSL works, you are in luck, as Microsoft actually released the shader code used to create BasicEffect in the Stock Effects sample available at http://xbox.create.msdn.com/en-US/education/catalog/sample/stock_effects.  In order for BasicEffect to work it needs the World, View and Projection matrices specified; thankfully we just calculated all three of these.

 

Finally, at the end of Initialize(), we create an array of VertexPositionColor, which as you can guess is a vertex with positional and color data.  We then copy the triangle data to a VertexBuffer using a call to SetData().  You may be thinking to yourself… WOW, doesn't XNA have simple primitives like this built in?  No, it doesn't, although there are easy community examples you can download such as this one: http://xbox.create.msdn.com/en-US/education/catalog/sample/primitives_3d.

 

The logic in Update() is quite simple.  We check for input from the user and respond accordingly.  In the event of arrow keys being pressed, or the +/- keys, we change camPosition.  At the end of the update we then recalculate the View matrix using our new camera position.  Also, in response to the space bar, we toggle orbiting the camera, and if we are orbiting, we rotate the camera by another 1 degree relative to the origin.  Basically this shows how easy it is to update the camera by changing the viewMatrix.  Note the Projection matrix generally isn't updated after creation, unless the resolution changes.
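The orbit step is worth seeing in isolation.  This sketch uses System.Numerics (the API is nearly identical to XNA's here) to show that rotating the camera position 1 degree per update around the Y axis, with the target left at the origin, traces a full circle:

```csharp
using System;
using System.Numerics;

// Same starting camera position as the sample, and the same 1-degree
// Y-axis rotation applied each "frame".
Vector3 camPosition = new Vector3(0f, 0f, -100f);
Matrix4x4 rotation = Matrix4x4.CreateRotationY((float)(Math.PI / 180.0));

// 360 one-degree steps bring the camera back (to within float error)
// to where it started, having orbited the origin once.
for (int i = 0; i < 360; i++)
    camPosition = Vector3.Transform(camPosition, rotation);
Console.WriteLine(camPosition);
```

Because only camPosition changes, recomputing the view matrix with CreateLookAt each frame is all that is needed to make the camera follow the orbit.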

 

Finally we come to our Draw() call.  Here we set the view, projection and world matrices of the BasicEffect, clear the screen, then load our VertexBuffer into the GraphicsDevice by calling SetVertexBuffer().  Next we create a RasterizerState object and turn culling off.  We do this so we don't cull back faces, which would result in our triangle being invisible when we rotate behind it.  Often you actually want to cull back faces; no sense drawing vertices that aren't visible!  Then we loop through each of the passes in the BasicEffect's current technique ( look at the BasicEffect.fx HLSL file and this will make a great deal more sense, or stay tuned for when we cover custom shaders later on ).  Finally we draw our triangle data to screen by calling DrawPrimitives(), in this case with a TriangleList.  There are other options such as lines and triangle strips; you are basically telling it what kind of data is in the VertexBuffer.

I'll admit, compared to many other engines, that's a heck of a lot of code just to draw a triangle on screen!  The reality is, though, that you generally write this code once and that's it.  Or you work at a higher level, such as with 3D models imported using the content pipeline.

 

Loading and Displaying 3D Models

 

Next we take a look at the process of bringing a 3D model in from a 3D application, in this case Blender.  The process of creating such a model is well beyond the scope of this tutorial, although I have created a video showing the entire process available right here.  Or you can simply download the created COLLADA file and texture.

Which File Format works Best?


The MonoGame pipeline tool relies on an underlying library named Assimp for loading 3D models.  You may wonder which of the many supported model formats you should use when exporting from Blender.  FBX and COLLADA (dae) are the two most commonly used formats, while X and OBJ can often be used reliably for very simple non-animated meshes.  That said, exporting from Blender is always a tricky prospect, and it's a very good idea to use a viewer, such as the one included in the FBX Converter package, to verify your exported model looks correct.

The above video also illustrates adding the model and texture using the content pipeline.  I won’t cover the process here as it works identically to when we used the content pipeline earlier.  Let’s jump right in to the code instead:

using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;
using Microsoft.Xna.Framework.Input;

namespace Test3D
{

    public class Test3DDemo2 : Game
    {
        GraphicsDeviceManager graphics;
        SpriteBatch spriteBatch;

        //Camera
        Vector3 camTarget;
        Vector3 camPosition;
        Matrix projectionMatrix;
        Matrix viewMatrix;
        Matrix worldMatrix;

        //Geometric info
        Model model;

        //Orbit
        bool orbit = false;

        public Test3DDemo2()
        {
            graphics = new GraphicsDeviceManager(this);
            Content.RootDirectory = "Content";
        }

        protected override void Initialize()
        {
            base.Initialize();

            //Setup Camera
            camTarget = new Vector3(0f, 0f, 0f);
            camPosition = new Vector3(0f, 0f, -5);
            projectionMatrix = Matrix.CreatePerspectiveFieldOfView(
                               MathHelper.ToRadians(45f), graphics.
                               GraphicsDevice.Viewport.AspectRatio,
                1f, 1000f);
            viewMatrix = Matrix.CreateLookAt(camPosition, camTarget, 
                         new Vector3(0f, 1f, 0f));// Y up
            worldMatrix = Matrix.CreateWorld(camTarget, Vector3.
                          Forward, Vector3.Up);

            model = Content.Load<Model>("MonoCube");
        }

        protected override void LoadContent()
        {
            spriteBatch = new SpriteBatch(GraphicsDevice);
        }

        protected override void UnloadContent()
        {
        }

        protected override void Update(GameTime gameTime)
        {
            if (GamePad.GetState(PlayerIndex.One).Buttons.Back == 
                ButtonState.Pressed || Keyboard.GetState().IsKeyDown(
                Keys.Escape))
                Exit();

            if (Keyboard.GetState().IsKeyDown(Keys.Left))
            {
                camPosition.X -= 0.1f;
                camTarget.X -= 0.1f;
            }
            if (Keyboard.GetState().IsKeyDown(Keys.Right))
            {
                camPosition.X += 0.1f;
                camTarget.X += 0.1f;
            }
            if (Keyboard.GetState().IsKeyDown(Keys.Up))
            {
                camPosition.Y -= 0.1f;
                camTarget.Y -= 0.1f;
            }
            if (Keyboard.GetState().IsKeyDown(Keys.Down))
            {
                camPosition.Y += 0.1f;
                camTarget.Y += 0.1f;
            }
            if (Keyboard.GetState().IsKeyDown(Keys.OemPlus))
            {
                camPosition.Z += 0.1f;
            }
            if (Keyboard.GetState().IsKeyDown(Keys.OemMinus))
            {
                camPosition.Z -= 0.1f;
            }
            if (Keyboard.GetState().IsKeyDown(Keys.Space))
            {
                orbit = !orbit;
            }

            if (orbit)
            {
                Matrix rotationMatrix = Matrix.CreateRotationY(
                                        MathHelper.ToRadians(1f));
                camPosition = Vector3.Transform(camPosition, 
                              rotationMatrix);
            }
            viewMatrix = Matrix.CreateLookAt(camPosition, camTarget, 
                         Vector3.Up);
            base.Update(gameTime);
        }

        protected override void Draw(GameTime gameTime)
        {
            GraphicsDevice.Clear(Color.CornflowerBlue);

            foreach(ModelMesh mesh in model.Meshes)
            {
                foreach(BasicEffect effect in mesh.Effects)
                {
                    //effect.EnableDefaultLighting();
                    effect.AmbientLightColor = new Vector3(1f, 0, 0);
                    effect.View = viewMatrix;
                    effect.World = worldMatrix;
                    effect.Projection = projectionMatrix;
                }
                mesh.Draw();
            }
            base.Draw(gameTime);
        }
    }
}

It operates almost identically to when we created the triangle by hand, except that the model is loaded using a call to Content.Load<Model>().  The other major difference is that you no longer have to create a BasicEffect; one is automatically created for you as part of the import process and stored in each mesh's Effects property.  Simply loop through each effect, setting the View, Projection and World matrix values, then call Draw().  If you have a custom effect you wish to use instead of the generated effects, you can follow the process documented here: https://msdn.microsoft.com/en-us/library/bb975391(v=xnagamestudio.31).aspx.
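For reference, the custom-effect approach from that article boils down to assigning your Effect to each ModelMeshPart before drawing.  A hedged sketch; the "MyCustomEffect" asset name is hypothetical:

```csharp
// Load a custom effect through the content pipeline (hypothetical asset
// name) and replace the auto-generated BasicEffect on every mesh part.
Effect myCustomEffect = Content.Load<Effect>("MyCustomEffect");

foreach (ModelMesh mesh in model.Meshes)
{
    foreach (ModelMeshPart part in mesh.MeshParts)
    {
        part.Effect = myCustomEffect;
    }
}
```

After this, mesh.Draw() will render using your effect instead; you then set your effect's own parameters in place of the View/World/Projection properties of BasicEffect.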

 

The Video

Programming


14. August 2015

 

In this part of the ongoing Paradox3D Game Engine tutorial series we are going to accomplish two tasks.  First we are going to show how to set the resolution of our game in Paradox Studio.  We will then look at an example of extending Game and implementing the same thing using code.  This will be a fairly short tutorial, but needed, as the process isn’t entirely intuitive.

 

As always, there is an HD video version of this tutorial available here.

 

Setting the Screen Resolution using Paradox Studio

 

The process of setting the resolution is incredibly easy, but certainly not intuitive.  To set the resolution, in Solution Explorer within Paradox Studio, right click the game package ( FullScreen in my case ), then select Package properties.

image

 

Then in the Property grid, set the width and height desired:

image 

And done.

 

Extending Game

 

Create a new class in your .Game project, I’m calling mine MyGame.cs.  Now enter the following code:

using SiliconStudio.Paradox.Engine;

namespace FullScreen
{
    public class MyGame : Game
    {
        protected override void Initialize()
        {
            // Set the window size to 720x480
            GraphicsDeviceManager.PreferredBackBufferWidth = 720;
            GraphicsDeviceManager.PreferredBackBufferHeight = 480;

            base.Initialize();
        }
    }
}

This code simply sets the resolution by using the GraphicsDeviceManager to set PreferredBackBufferWidth and PreferredBackBufferHeight to our desired dimensions.  Initialize() is called after your application's constructor, but before a window is displayed, making it an ideal location to set the resolution.  Why "preferred"?  Because, frankly, outside of desktop platforms (on mobile, for instance), you often don't have control over the window size.  As in the previous tutorial, it's very important to remember to make your class public.

 

Please note, Initialize() is just one point in the application lifecycle; there are several other protected methods you can override to gain much more precise control over the lifecycle of your game:

image

 

Now that we have created our own custom game class, we need to update the entry point for each target platform to create an instance of our new class instead of using Game.

image

 

Edit the ___App.cs file accordingly:

 

using SiliconStudio.Paradox.Engine;

namespace FullScreen
{
    class FullScreenApp
    {
        static void Main(string[] args)
        {
            using (var game = new FullScreen.MyGame())
            {
                game.Run();
            }
        }
    }
}

 

 

The Video



14. August 2015

 

In this part of the Paradox3D game engine tutorial series we are now going to look at how you actually program your games.  In the end you will discover that it’s actually a pretty straightforward process, but could certainly use some streamlining.  ( The option to generate a .cs file when you add a script component would be a nice little time saver… ).  Minor quibble however… let’s jump in.  The code in this particular example was written to target version 1.2.  If the code doesn’t work any more, be sure to check the comments for suggestions.  If there is no fix there, please email me.

 

As always, there is an HD video of this process available here or embedded below.

 

Creating a new Script

Scripting in Paradox is a two step process.  First you create the script, generally in Visual Studio.  Then you attach the script to an entity, either programmatically, or using the editor.  We are going to look at the process of creating the script first.

 

In Visual Studio, inside your .Game folder, create a new cs file.

image

 

I personally called mine ExampleScript; outside of standard C# naming requirements, the name really doesn't matter.  We now have two options for how to implement our script: it can be either a SyncScript or an AsyncScript, and we will show an example of both.

A SyncScript, as the name suggests, runs synchronously.  That is, as the game loop iterates over and over, each frame our script's Update() function is called and we handle the logic of our script.  An AsyncScript, on the other hand, takes advantage of C# 5's async functionality and allows your script to run in parallel.  This could lead to performance gains on multi-processor machines.  Which works best is ultimately up to you and your game's design.

SyncScript example:

using System;
using SiliconStudio.Paradox.Engine;

namespace ScriptingDemo
{
    public class ExampleScriptSync : SyncScript
    {
        public override void Update()
        {

            if (Game.IsRunning)
            {
                if (Input.IsKeyDown(SiliconStudio.Paradox.Input.Keys.Left))
                {
                    this.Entity.Transform.Position.X -= 0.1f;
                }
                if (Input.IsKeyDown(SiliconStudio.Paradox.Input.Keys.Right))
                {
                    this.Entity.Transform.Position.X += 0.1f;
                }
            }
        }
    }
}

 

AsyncScript example:

using System;
using System.Threading.Tasks;
using SiliconStudio.Paradox.Engine;

namespace ScriptingDemo
{
    public class ExampleScriptAsync : AsyncScript
    {
        public override async Task Execute()
        {
            while (Game.IsRunning)
            {
                await Script.NextFrame();

                if (Input.IsKeyDown(SiliconStudio.Paradox.Input.Keys.Left))
                {
                    this.Entity.Transform.Position.X -= 0.1f;
                }
                if (Input.IsKeyDown(SiliconStudio.Paradox.Input.Keys.Right))
                {
                    this.Entity.Transform.Position.X += 0.1f;
                }
            }
        }
    }
}

 

This particular tutorial isn't actually about how you program Paradox, so don't pay too much attention to how the code works; that will all be explained later.  Just be aware that both the async and sync scripts do the same thing: translate the Entity they are attached to along the X axis when the Left or Right arrow keys are pressed.  The important takeaways are that your script derives from one of the two mentioned classes, and that your script has access to the entity it is attached to, and indeed to the entire game engine, allowing you to do just about anything.  Update() is not the only callback available; there are also Start() and Cancel(), if you need startup or cleanup functionality.
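Those extra lifecycle callbacks would be overridden like the following sketch.  This assumes the Paradox 1.2 API described above; exact signatures may differ between engine versions:

```csharp
using SiliconStudio.Paradox.Engine;

namespace ScriptingDemo
{
    // Hedged sketch: in addition to Update(), a SyncScript can override
    // Start() (run once, before the first Update) and Cancel() (run when
    // the script is removed or the game shuts down).
    public class ExampleScriptLifecycle : SyncScript
    {
        public override void Start()
        {
            // One-time setup, e.g. caching the entity's starting position.
        }

        public override void Update()
        {
            // Per-frame logic goes here.
        }

        public override void Cancel()
        {
            // Cleanup, e.g. releasing anything acquired in Start().
        }
    }
}
```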

 

One final extremely important note…  MAKE SURE YOUR CLASS IS PUBLIC!   Otherwise it will not be available in the editor!  Sorry, I’ll stop yelling now.

 

Implement one of the two scripts ( or both, it doesn’t matter ), then compile your project to make sure you haven't made any errors.  We are now ready to attach the script to an entity in Paradox Editor.

 

Attaching a Script using Paradox Studio

 

Now that we have a script, we can attach it to one or more entities in our scene.  In an ideal world, Paradox Studio will notice the changes you made and pop up a dialog telling you so.  Unfortunately, at least right now, it rarely succeeds with the first script you create.  In this case, simply do a quick restart of Studio using the menu: File->Reload project.

image

 

Now in the 3D view, select the entity you want to attach a script to.  If you are unfamiliar with operating Paradox Studio, please refer to this tutorial.  I am going to attach this script to the sphere model created in a default scene:

image

 

Now go to the Property Grid and press the Add Component button and select Scripts from the drop down.

image

 

Now scroll down to the Scripts component that should have been added, Click the green plus sign next to Script, then in the drop down for Item 0, select your script.

image

 

Run your game using the toolbar:

image

 

You can now control the sphere using the arrow keys:

AttachingArrowKeyScript

 

A couple of cool things here.  First, this shows that the same script can be used to control multiple entities.  We could attach the exact same script to our camera, the light, another model, etc… and it would just work.  Second, you can attach multiple scripts to the same entity.

 

What we didn’t cover

We covered the basics of attaching a script to an Entity in Paradox, and it should give you a good idea of how you add logic to the entities in your game.  There are two things we didn't cover (yet) that I think are important to be aware of before we move on.

 

First, in addition to the SyncScript and AsyncScript classes, there is a third script type, StartupScript.  This is a type of script that is run when your object is created.  The major difference is that it is not called each frame or asynchronously, like the other two script types.
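A minimal sketch of what a StartupScript might look like, assuming the same SiliconStudio.Paradox.Engine namespace used throughout this series (the position tweak is just a placeholder):

```csharp
using SiliconStudio.Paradox.Engine;

namespace ScriptingDemo
{
    // A StartupScript only runs its Start() method once, when the entity
    // it is attached to enters the scene; there is no per-frame callback.
    public class ExampleStartupScript : StartupScript
    {
        public override void Start()
        {
            // One-time initialization, e.g. positioning the entity.
            Entity.Transform.Position.Y = 1f;
        }
    }
}
```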

 

Second is the game class.  If you look in your generated project, in each platform you will see an entry point, like this one for the Windows platform:

image

 

Here are the contents of that script:

using SiliconStudio.Paradox.Engine;

namespace ScriptingDemo
{
    class ScriptingDemoApp
    {
        static void Main(string[] args)
        {
            using (var game = new Game())
            {
                game.Run();
            }
        }
    }
}

 

As you can see, the heart of this script is to create an instance of Game, then call Run().  If you require more control over the lifecycle of your game, you can easily derive your own game from the Game class and create an instance of it instead.  We will see a simple example of this process in the next tutorial.

 

Don’t worry if you are a bit lost on the specifics of the code, I had no intention of explaining how the code actually works, those posts will be coming in the near future.  You should however have a good idea now of how you create a script and attach it to your game entities.

 

The Video


