Today I focused on getting a simple physics simulation running. The main work consisted of creating a collision detection system based on the BSP tree. I went for the well-described approach from the thesis I have been using previously. I have to say that I don't like the result yet: there are occasional slips where it seems to miss a polygon. I also did not care about slopes and stairs yet. Jumping around the world is good enough for now. If I have time, I might go for this more solid "sweep"-based approach.
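To make this a bit more concrete: the basic query the collision system performs is classifying a position against the BSP and checking whether it lands in solid or empty space. Here is a minimal Python sketch of that idea (my implementation is in Java; the node layout and all names here are illustrative, not my actual code):

# Minimal point-vs-BSP query. Inner nodes split space by dot(normal, p) = dist;
# leaves mark solid or empty space.
class Leaf:
    def __init__(self, solid):
        self.solid = solid

class Node:
    def __init__(self, normal, dist, front, back):
        self.normal, self.dist = normal, dist
        self.front, self.back = front, back  # children: Node or Leaf

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def point_in_solid(node, p):
    # Descend into the half-space containing p until we hit a leaf.
    while isinstance(node, Node):
        node = node.front if dot(node.normal, p) >= node.dist else node.back
    return node.solid

For a sphere instead of a point you would test against the planes pushed out by the radius; the "sweep" approach clips the whole movement segment against the tree instead of sampling positions, which is what should fix the occasional slips.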
I also implemented the split-plane algorithm/metric for the BSP tree generation today. It is a very hard problem (actually it is NP-hard :-)) to find a good metric that keeps the tree balanced while producing a minimal number of polygon splits. Currently, some rooms are completely covered by one convex leaf of the tree, but others are split because the algorithm sometimes has to favor a balanced split. I might try to improve the algorithm and add some additional weight if a split produces a convex subregion. This should then make every simple convex room end up in its own leaf.
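For reference, the heuristic boils down to something like this sketch (Python for brevity; the weights and the helper classify_polygon(), returning 'front', 'back', 'spanning' or 'coplanar', are my illustrative choices):

BALANCE_WEIGHT = 1.0
SPLIT_WEIGHT = 4.0  # a polygon split usually hurts more than some imbalance

def score_plane(plane, polygons, classify_polygon):
    front = back = splits = 0
    for poly in polygons:
        side = classify_polygon(poly, plane)
        if side == 'front':
            front += 1
        elif side == 'back':
            back += 1
        elif side == 'spanning':
            splits += 1
    # Lower is better: penalize imbalance and, more strongly, splits.
    return BALANCE_WEIGHT * abs(front - back) + SPLIT_WEIGHT * splits

def pick_split_plane(polygons, classify_polygon):
    # Candidate planes are taken from the polygons themselves (auto-partitioning).
    return min((poly.plane for poly in polygons),
               key=lambda plane: score_plane(plane, polygons, classify_polygon))

The convexity idea would be one additional (negative) term: reward a plane if one of the resulting subregions is already convex, so that it can immediately become a leaf.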
Here is a screenshot of a room in the world, where each color represents one leaf in the BSP. Note how the whole room, because it's convex, has ended up in a single leaf.
Oh; and I started on the portalizing of the world for the culling algorithm. I am not sure yet whether I will precalculate potential visibility sets on the BSP leaves or just use a portal-frustum-clipping approach at runtime. Even on my laptop, I am always surprised how high the fillrate of modern graphics cards is. I am rendering thousands of polygons and don't see any performance penalty yet. OK, I don't have any complex fragment shaders (using the fixed-function pipeline API currently), but anyway…
After all the coding on Day 1 I actually was a little exhausted and my brain still needed to digest some of the reading. So, I started the day off with a nice bicycle ride. That actually helped a lot. I had my moment of zen regarding Binary Space Partitioning on the bike: I think now I get how it will later help me with the static visibility calculations, i.e. Potential Visible Sets.
Anyway; the day again consisted of a lot of reading:
- BSP tree generation and how it can be used for visibility calculations, lighting and physics is really nicely described in this thesis. At the moment it is my main source of information on BSP, and even though it is mentioned nowhere, I cannot help but wonder how close the concepts are to what Quake is using. But no mention whatsoever…
- Some tips on BSP tree details are described in this rather simple txt file.
- And before I forget it: my first introduction to BSP trees was Michael Abrash's Graphics Programming Black Book. It describes the concepts in 2D, but the above-mentioned thesis gives the full details on how to do it in 3D, with all the necessary operations on points, planes and polygons to actually classify faces and cut up the world. A minimal sketch of that classification follows right after this list.
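The classification is simple enough to sketch here (Python; the epsilon and the naming are my choice). A plane is stored as a normal n and a distance d with dot(n, x) = d:

EPS = 1e-5  # tolerance for points lying on the plane

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def classify_point(p, normal, dist):
    s = dot(normal, p) - dist
    if s > EPS:
        return 'front'
    if s < -EPS:
        return 'back'
    return 'on'

def classify_polygon(vertices, normal, dist):
    sides = {classify_point(v, normal, dist) for v in vertices}
    if 'front' in sides and 'back' in sides:
        return 'spanning'  # the polygon must be cut by the plane
    if 'front' in sides:
        return 'front'
    if 'back' in sides:
        return 'back'
    return 'coplanar'  # all vertices lie on the plane

Every other operation (splitting a polygon, building the tree, pushing detail geometry into leaves) is built on top of exactly this test.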
The first result of a BSPed world actually looked like this:
Turning off the z-buffer, it actually rendered the world correctly in back-to-front order. Reason enough to believe the algorithm is correct. But obviously, the world was partitioned too much. All the detail architecture like stairs and boxes contributed to the world partition, which is not what e.g. Quake 2 is doing. There, a distinction is made between structural and detail architecture: the BSP is generated from the structural architecture alone, and the detail architecture is just pushed into the leaves of the BSP tree. If the choice of the split planes is good, almost every convex room of the world should end up in a leaf of its own.
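The back-to-front rendering itself is just an ordered tree walk relative to the camera position; a Python sketch with illustrative node fields:

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def render_back_to_front(node, camera, draw):
    if node.is_leaf:
        for poly in node.polygons:
            draw(poly)
        return
    # Painter's algorithm: draw the subtree on the far side of the split
    # plane first, then the near side, so near polygons overdraw far ones.
    camera_in_front = dot(node.normal, camera) >= node.dist
    far = node.back if camera_in_front else node.front
    near = node.front if camera_in_front else node.back
    render_back_to_front(far, camera, draw)
    render_back_to_front(near, camera, draw)

With the z-buffer enabled you would walk front-to-back instead, to reject hidden fragments early.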
The main problem, I found out, is that the map editor TrenchBroom targets Quake 1 and did not seem to support detail brushes yet. So, no way to set this property. Also, I was already thinking about texturing the world; and as all the map editors are inherently tied to id Software's WAD file format, I decided to switch to Blender for modelling the map. There already exists an export plugin for the .map file format and some great introductions on how to use Blender for modelling levels in this format. Check here and here.
The Blender export plugin actually did not support detail brushes either, but I knew enough about the Python API to add this feature. Here is a better split:
What I still wanted to do, but will push to tomorrow, is the algorithm and metric for choosing a good splitting plane in the BSP creation, and a visualization of the AABBs of the BSP tree leaves for debugging purposes. As all that comes next relies heavily on the BSP, this might be a helpful debugging tool…
After that, tomorrow consists of collision detection and simple physics or the visibility/PVS algorithm.
I decided yesterday to finally participate in a game jam again. Not just for one or two days, but the 7DFPS, which lasts a whole week. I am no big fan of the first-person shooter genre from a gameplay perspective, but I am highly interested in the technologies used to make these visual worlds possible. With id Software's John Carmack as one of the celebrities in the field, pushing the technology for the past 20 years, I decided on challenging myself to have a deeper look at some of the concepts that make up the Quake engine, used by a huge number of successful games, starting with id Software's in-house titles Quake 1 to 3. I will take this as a learning experience and not so much as a task to create a playable game at the end of the week. In case it happens, I won't be disappointed though…
The core idea behind the Quake engine family (id Tech 1 to 3, used in Quake 1 to Quake 3) seems to have stayed quite constant. Mainly: using Binary Space Partitioning (BSP) and Constructive Solid Geometry (CSG) to create the game world and define its solid and empty space. Querying the world database for visibility calculations, lighting calculations and physics is done via the BSP tree structure. The BSP tree itself, visibility (Potential Visibility Sets are built and attached to the BSP) and lighting (shadows/lights are baked into static textures that are then just blended with the standard diffuse textures of the world at runtime) are calculated in an expensive preprocessing step. During runtime of the game, the BSP is then used mainly for rendering, physics, and dynamic light calculations.
From there, the first day mainly consisted of reading articles and papers on the core technologies used by quake’s graphics engine; Binary-Space-Partition Trees (BSP Trees) and Constructive Solid Geometry:
To be able to build a BSP of the world, all the individual solid entities the world is made up of (lots of blocks) have to be merged via a CSG union operation. This leads to a set of solid and closed surfaces where no two polygons/faces intersect each other. I read up on some CSG approaches over at flipcode. From there, the Laidlaw paper on CSG is a good next step, as it seems this was also the main approach used within Quake's CSG tools.
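As far as I understand the Laidlaw approach so far, the union boils down to: split the faces of each brush by the planes of every brush it touches, then throw away the fragments that ended up inside the other solid. A heavily simplified Python sketch (split_polygon(), point_inside() and centroid() are assumed helpers, and coplanar faces need extra care that I am skipping here):

# Sketch of a brush-vs-brush CSG union.
# split_polygon, point_inside and centroid are assumed helpers (see text).
def clip_to_brush(polygons, brush):
    # Cut each polygon into fragments along the brush planes, then keep
    # only the fragments whose center lies outside the brush.
    surviving = []
    for poly in polygons:
        fragments = [poly]
        for plane in brush.planes:
            pieces = []
            for frag in fragments:
                front, back = split_polygon(frag, plane)
                pieces += [f for f in (front, back) if f is not None]
            fragments = pieces
        surviving += [f for f in fragments if not point_inside(brush, centroid(f))]
    return surviving

def csg_union(brush_a, brush_b):
    # The union surface is each brush's surface minus what lies in the other.
    return clip_to_brush(brush_a.polygons, brush_b) + \
           clip_to_brush(brush_b.polygons, brush_a)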
From there, I started to get more practical: let's try to create a simple world, load it, CSG it and BSP it. Obviously, I needed a fast way to create this world, define some export format and load it. As I am on the track of Quake, I decided to use the .map file format that is also used by all the level editors for the Quake engine family. TrenchBroom was the editor of choice because it runs on Windows and Mac OS. For the .map file format itself there is a great article available describing not only the structure but also some alternative approaches for the polygon creation and the CSG process.
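The neat part of the .map format is that a brush is stored purely as a set of half-space planes; the polygon vertices have to be recovered by intersecting all plane triples and keeping the points that lie inside every plane. A self-contained Python sketch (the convention that normals point outwards is my assumption):

from itertools import combinations

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def intersect_planes(p1, p2, p3):
    # Planes are (normal, dist) with dot(normal, x) = dist.
    (n1, d1), (n2, d2), (n3, d3) = p1, p2, p3
    denom = dot(n1, cross(n2, n3))
    if abs(denom) < 1e-6:
        return None  # no unique intersection point
    a, b, c = cross(n2, n3), cross(n3, n1), cross(n1, n2)
    return tuple((d1*a[i] + d2*b[i] + d3*c[i]) / denom for i in range(3))

def brush_vertices(planes):
    # Every triple of planes is a vertex candidate; keep only points that
    # are on or behind all planes of the brush (normals pointing outwards).
    verts = []
    for p1, p2, p3 in combinations(planes, 3):
        v = intersect_planes(p1, p2, p3)
        if v and all(dot(n, v) <= d + 1e-5 for (n, d) in planes):
            verts.append(v)
    return verts

The vertices then still have to be grouped per plane and sorted into winding order to form the actual faces; the article mentioned above covers those details.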
I also tried to read the QBSP tool (which is part of the Quake engine's asset creation pipeline; i.e. it loads a .map file and does the CSG, BSP, visibility, lighting etc.) to get some help on implementing the CSG algorithm. It is really hard to read code with a lot of shared state and confusing names. Fabien Sanglard's commented repository clone helps a little to understand the interwoven codebase.
So what did I do except reading? I implemented the loading of the .map file (Java + LWJGL), and that's the result of today (as you can see in the screenshot in the top right): a loaded .map file, CSG'ed and rendered. The colored faces show nicely how the actual source polygons for the walls and floors have been split up into smaller regions to form one closed surface for the whole room.
Tomorrow comes the BSP.
The game is by far the most complete one I have developed lately (especially in that timeframe), but due to the time constraints and other topics, I was not able to give it the final polish to make it as great as it could have become. For one thing, I discovered last minute that my asset loader is broken and randomly fails to load all textures and sounds correctly. I found that Dropbox (where the game is hosted) seems to deny too many simultaneous connections and silently just blocks/kills them (the Chrome network tab just shows them as "pending" connections). Anyway, I was not able to take the time to find and really fix this bug. That's why the game randomly fails to load. You can see it from the comments.
Another thing I discovered last minute was related to the randomly generated levels, which were the most fun part to work on and definitely something I will spend more time on in the future. Sometimes the random level generation breaks; no idea why; also because I currently have no time to fix it.
So, even though "One Game a Month" is a blessing for me to get something done, due to the time constraints it leads me to publish a game at the end of the month that is not of the quality I wanted it to be. I know of fellow participants who polish a game after the month and only release it then. I respect that and would love to be able to do so too, but the next, more interesting project always seems to be waiting already. I guess this is just another level of "not getting things finished".
Anyway, I would like to reflect on the game a little, as I am quite pleased with some core decisions:
- Doing art yourself takes a lot of time. That's why this time I decided to get the art done by someone else. I decided on a sprite pack from Oryx Design Lab. The pixel-art style is exactly what I personally would like to be able to do myself: a minimal number of pixels, but maximum character. I had to modify the sprites slightly to get some movement, and walls and floors I had to create myself. So, it was a good balance between doing some art but not wasting too much time.
- I used some sound packs from OpenGameArt.org for the sounds of sword, pistol, enemies and footsteps, and they give some great depth to the game. I especially like the sound of the pistol. Just amazing what a poor man's pixely pistol sprite and a great sound can create. I will definitely spend more time on this in the future. Also, the atmospheric background sound track helps a lot. Thanks to DST for the great sounds.
- I have been spending a lot of time on generating random levels, as the theme for this month was "rogue-like" and randomness/replayability is one major aspect defining this genre. I am quite happy with my basic approach, as it is very extendable, and if I had had more time, the levels would look much better than they do right now. I will describe it in a separate section below in case you are interested.
- I switched editors from Jetbrains Webstorm to Sublime Text 2. A really nice and responsive editor that works on Windows and Mac alike.
Random World Generation
The random world generation for Arkham3D uses quite a bit of brute force, but it turned out to work just fine. If you look at the world from above, it is basically just a 2D array of tiles with certain properties: walls, floors, hedges. And on some tiles, there are entities like monsters or collectables (keys, health, ..).
The random world generation takes as input just the width and height of the generated world and a seed for the random number generator, to be able to reproduce the randomness. At some point the idea was to be able to share these seeds between friends; i.e. after you complete a world in a good time, you can mail a link to the game (containing the seed) to a friend, and the friend can play the exact same level and challenge the time. This works because the randomly generated level is the same when the same seed is used. I might come back to this concept for some future game.
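The seed mechanics themselves are trivial; a tiny Python sketch (the function body is a stand-in, of course; the real generator is JavaScript):

import random

def generate_world(width, height, seed):
    rng = random.Random(seed)  # all randomness goes through this one generator
    # ... place rooms, portals, doors, monsters using only rng.* calls ...
    return [rng.randint(0, 3) for _ in range(width * height)]  # stand-in world

# The same seed always reproduces the same level; shareable via a link:
assert generate_world(64, 64, seed=1337) == generate_world(64, 64, seed=1337)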
The algorithm for generating a random world of defined width and height is then as follows:
- Randomly place dungeons/rooms of random size (obviously there is some min/max size) as far to the top and left of the map as possible. It has to be made sure the rooms do not overlap. If a room cannot be placed in the world because there is not enough space left, the room size is decreased one unit at a time and placement is tried again. Once no more rooms can be placed, the loop terminates.
- We now have a world full of disconnected/closed rooms. The algorithm now looks for neighboring rooms and connects them by a portal. The connectivity information is stored in a graph structure.
- After the previous step, we have a heavily connected undirected graph describing the connectivity between rooms. As each room connects to each of its neighboring rooms (if possible and there is no portal that obstructs the creation of another portal), the world is not very challenging to explore. So, the next step is to run a breadth-first search on the graph to create a spanning tree (see the sketch after this list); i.e. portals unnecessary to connect the world are removed. The world is still connected, but there are no two ways to get from a to b. Much more like a maze to discover now.
- Now we have a connected world/maze, but the rooms all look alike. That's why I defined certain "room natures", i.e. which floor and wall textures can be used together to form a room. These natures are then randomly applied to the rooms. So, one room is red brick, one is gray brick, etc.
- Additionally, to not just have empty rectangular rooms, we randomly apply certain templates to the rooms. I.e. if a room has a certain size, randomly give it a separating wall or a centered inner room; or place some hedge tiles randomly in the room. This allows for more interesting room designs and more places for monsters to hide.
- We have treasures to hide. Ideally we want them locked behind a door that first has to be opened with a key. The algorithm randomly places the treasures in the leaf nodes of the spanning tree (the player has to discover the world before finding the treasures) and randomly walks up the spanning tree towards the root. At some point it places a door and marks all child nodes as locked by this door. So, if we place another door or key, we take a subtree that is not locked up yet. After all doors are placed, the keys are randomly placed in the area that is not locked behind a door. (I could have made it more interesting by having chains, i.e. a door unlocks an area with another key, but I did not bother for now; the worlds are confusing enough.)
- Now, only the monsters and collectables are missing. Randomly place them in the rooms based on a cost metric: a room has a certain hardness and monsters have a certain difficulty. So, place random monsters in a room until the maximum difficulty for that room is reached, e.g. 3 bats or 1 blue ghost. The difficulty increases as the player gets deeper into the tree.
- As a last step, we place the player spawn at the root of the spanning tree.
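As promised above, here is a sketch of the spanning-tree step, which is plain breadth-first search over the room graph (Python for illustration; the game itself is JavaScript and the names are made up):

from collections import deque

def spanning_tree_portals(neighbors, start_room):
    # neighbors maps each room to the set of rooms it shares a portal with.
    # Returns the subset of portals that keeps the world connected as a tree.
    visited = {start_room}
    kept = []
    queue = deque([start_room])
    while queue:
        room = queue.popleft()
        for other in neighbors[room]:
            if other not in visited:
                visited.add(other)
                kept.append((room, other))  # this portal survives
                queue.append(other)
    return kept  # all other portals are sealed off

The tree structure is then reused everywhere: treasures go into its leaves, doors and keys are placed while walking towards the root, and room difficulty grows with the depth below the spawn room.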
It was really interesting to use randomness to generate the levels and not have to bother too much with level design for this game. You get instant replayability if the generator is doing a good job.
I am not sure if I will be able to keep up with the pace of One Game a Month and not have something half-polished at the end of the month. Either I have to take it for what it is (a training and learning experience) or I have to set my goals lower. A full 2.5D game like Arkham3D is definitely nothing I will be trying to pull off in one month again.
Also, I will be focusing on development for the Oculus Rift in the next months and thus might not have time to churn out another game. It will be more of a “Research in VR” period.
For my game development hobby projects I have been using Java for most things (libGDX is great for Android). It is a great language with amazing tooling/IDEs (Eclipse, IntelliJ). I especially like hot code replacement. Especially in the run-loop of a game this can be very handy for rapid development, letting you immediately see your changes take effect. It might not be on the level of what Bret Victor envisions, but it is a first step.
Anyway, I have been wanting to try out WebGL for a long time now; last time after finding this WebGL port of one of my favorite rendering engines. The context at work now gave me enough of an understanding of the surrounding technologies to actually try it for one of my game projects. I was hoping to jump directly into the actual game content and not have to learn a new language and environment from the ground up. And it actually worked out quite fine.
Also, I wanted to go in the direction of using the web for future games anyway; the barrier to starting a browser game is much lower than to download and start a game, or even run a Java-applet-based game in the browser. This can be crucial for a high exposure rate of your game during a game competition like Ludum Dare.
I decided to join the "One Game a Month" initiative (#1GAM) and make a February game based on HTML5 technologies. The result is a simple retro-style jump-and-run ala Commander Keen, named "Agent 386". You can find it here.
What technologies have I been using and what are some of the take-aways?
- WebGL is just great! Instead of having to write a lot of boilerplate code just to get one triangle on the screen, in WebGL it is just so easy: create a canvas, get its 3D context and you are ready. Also, the code to load a shader is quite straightforward if you include the shader code in a script block in the HTML page. As WebGL, just like OpenGL ES 2.0, does not come with a matrix stack, the glMatrix library is indispensable.
- WebAudio I only got working in Chrome, as it seems Firefox does not support it yet; nevertheless, in Chrome it is working great (I first had some issues with the sound being distorted, but it went away and never came back) and you can easily load and play an Ogg-Vorbis-encoded audio file in a few lines of code. Wait until Firefox fully supports it, and the days of Flash are over. Well, the days of Flash are already numbered, but a Flash-based sound plugin was one of the last useful applications I could think of.
- The LocalStorage API is great for storing save-games and high scores.
- Now comes the greatest part: the Gamepad API! I bought an Xbox 360 game controller (the "for Windows" version) and hooked it up to my Mac mini (see this controller driver to get it running). I have the Mac connected to my TV, running mainly XBMC as an entertainment system. But it also has Chrome on it. The perfect place to try out my jump-and-run and actually enjoy a few entertaining minutes in front of the TV (German TV is crap; I usually try not to turn on the TV, except for listening to some music television). It worked like a charm, and I imagine there is a huge potential and market for casual browser-based games in the living room. PlayStation 4 has already blown it (what indie game developer can afford to publish a game on that platform?) and Microsoft might do as well with the Xbox, but there still could be a new gold rush for indies and gaming in the living room. Apple TV might be the winner here, just as on the mobile market ($100 annual developer subscription and a 70/30 revenue share; that was needed to start the gold rush and is all that's needed now). But I digress…
- Zepto.js: jQuery is a quite heavy and big framework, but without discussion it is indispensable in almost any web application. Just for the simple DOM manipulations of a WebGL game, though, it might be overkill. Creating or querying a canvas (and loading the shaders from script tags) is no reason to include jQuery. Zepto.js is an implementation of the jQuery API without all the legacy browser support et al., making it lightweight and a reasonable include.
IDE, Browser, Runtime, etc..
- I have a license for the WebStorm IDE at work, so I naturally chose to use it for my game as well. It has a great debugger integration, and setting breakpoints in your code is a big plus; I don't want to do this in the debugging panel of Chrome. But the missing hot code replacement is really a disadvantage. For developing with angular.js et al., WebStorm comes with a great Chrome plugin that immediately swaps in code changes in the browser. I think it is similar to a page refresh with a little bit of additional stardust; because of that it is useless for me in a WebGL context: make a code change and basically the whole game is reloaded. Maybe I am also missing something, because I have been reading that you CAN do hot code replacement in Chrome if you change your files directly in the Chrome developer panel ("does the WebStorm plugin maybe work in a similar fashion?", I am just now thinking…), but it did not work for me. I think I will be trying Sublime Text 2 for my next project, as the lightweight feel and responsiveness might make up for the missing debugger integration.
- I used Chrome for the most part of development and only started to test in Firefox and Safari when I was already well into the development and almost 50% finished. I was surprised that the game immediately ran in both browsers without needing any modifications. Firefox without sound (I coded the access to WebAudio defensively, as it is the one HTML5 API that the game does not have to rely on to function), but still impressive. I tested it quickly in Chrome on my Android phone and it did not work out of the box. But I also did not really care, as I don't like the feel of jump-and-run games on a touch screen. The game controller with haptic feedback is the home zone of these kinds of games. Disabling cross-site scripting checks in the browser (--disable-web-security for Chrome) was also a big plus for loading config files and resources via Ajax without needing a webserver for development.
My takeaway from this journey is that I will stick with HTML5 for future projects. It has all that's needed in a nice sandbox, and the runtime environment (the browser) is omnipresent. Everybody has a browser everywhere and can enjoy a casual game. Let's wait for the Apple TV and Steam Box, and we will have the living room gold rush, with HTML5 taking a front seat!
I also have my asset-creation and development toolchain set up and ready for bigger projects now. As I wanted to learn, I did not go for Impact.js as the game library but created the framework from scratch myself. Sprite creation, spritesheet conversion, audio file conversion to Ogg, etc. are set up in a make-based automated process. So, compilation of all the assets, obfuscation/minification of the code and deployment to the web server is only one "make deploy" command. For the first time, I have been trying to rely on a lot of good tools that already exist instead of doing everything myself. The tools I used for Agent 386 are:
- Tiled for creating the tile-based levels (before, it was usually Paint.NET; just “draw” a map into a PNG)
- TexturePacker for creating spritesheets (commandline interface included!)
- ASEprite alongside Paint.NET for the actual creation of animated sprites (it helps to see the animation immediately instead of starting up the game and checking there)
- oggenc for converting WAV files to Ogg Vorbis
- Bfxr for sounds
The only thing left for me to say: try out WebGL for game development, and if you are in the mood, check out Agent 386 on your big-screen TV with a gamepad and enjoy. For me it was certainly an interesting journey, and with only 3 weekends of almost all-nighters, I am proud to have been able to get something finished.
It has been a long time since my last post. On the technical side, I have been doing a lot of research into computer graphics and game development. Since participating in the last Ludum Dare with Shit it's evolving, I have regained my old interest from university in doing computer graphics. For this, I have been reading and implementing a lot. During the Ludum Dare I played with a raycasting-based software renderer ala Wolfenstein 3D. The next step was to actually implement a full 3D renderer in software. I mainly took the legacy fixed-function-pipeline OpenGL API and implemented the rendering underneath in software. Implementing perspective projection, clipping in homogeneous coordinates and the actual scan conversion of the polygons all by hand helped a lot to gain a deeper understanding of the pipeline. I will write about this in a separate post once I have time. Just as a quick hint if you want to do something similar: this book is the holy grail!
It might not look like it, but trust me: it is pure gold!
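To give a feel for what implementing this stage by hand means, here is a stripped-down Python sketch of projecting a single camera-space point (matching the standard OpenGL perspective matrix; the real renderer clips polygons in homogeneous coordinates before the divide, which I am skipping here):

import math

def project(p, fov_deg, aspect, near, far, screen_w, screen_h):
    x, y, z = p  # camera space, looking down the negative z-axis
    f = 1.0 / math.tan(math.radians(fov_deg) / 2)
    # Clip-space coordinates, as the fixed-function perspective matrix yields:
    cx = (f / aspect) * x
    cy = f * y
    cz = ((far + near) * z + 2 * far * near) / (near - far)
    cw = -z
    if cw <= 0:
        return None  # behind the eye: proper homogeneous clipping needed here
    # Perspective divide to normalized device coordinates, then viewport map:
    sx = (cx / cw + 1) * 0.5 * screen_w
    sy = (1 - cy / cw) * 0.5 * screen_h
    depth = cz / cw  # the value a z-buffer would compare
    return sx, sy, depth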
Next to that, I have been reading a lot of id Software source code. I think it cannot be appreciated enough that they put this stuff up on GitHub for everyone to play with. My main focus was on the Quake 1 sources. In combination with Fabien Sanglard's source code review and Michael Abrash's Graphics Programming Black Book (available for free online), which contains some chapters on core Quake engine topics, it was a surprisingly easy read.
After working on all of these topics, I decided to get my Ludum Dare entry onto the Android Play Store. After doing a first port of the game to Android, I quickly noticed that software rendering is too much of a burden for the CPU of a mobile device and I would need to leverage the GPU. At that point I basically threw away all I had and completely reimplemented the engine using OpenGL ES and libGDX (a really nice framework that lets you develop Android games on the desktop for the most part, without actually touching a device or emulator). As everything is new and looks much more polished now, I decided on a new title and will try to make my first steps as an indie game developer with it. It is called Boskovice, and the landing page with the current progress can be found here. I plan to make it into a casual first-person shooter for the Android platform with a strong retro feel. Even though I hate most of today's games (I haven't played a triple-A game title in 5 years), I still like to play the old id Software, 3D Realms and Apogee titles. This is the feel I am aiming for. Let's see how it turns out.
If you like the idea of a casual retro-style first person shooter for android let me know in the comments. I will also be looking out for beta-testers shortly. So, if you are interested, post in the comments or hit me on twitter.
I just got asked how to export animations from within Blender. While collecting the essential snippets, I thought I might as well put them in a short post in case they can help somebody else. Note that this post assumes that you are familiar with Blender's Python API and thus only gives the most essential information to get you started, i.e. the main data paths to get animation data. More specifically: how to export the transformed vertices for every frame of the animation.
Let's assume you have defined an animation on some object in Blender, e.g. by manually setting keyframes on the object or by recording some physics-based animation with the Blender Game Engine. Let's also assume the object is currently selected and active. The essential steps to access the transformed vertices are as follows:
import bpy

# Get the active object
o = bpy.context.active_object

# Get the index of the first and last frame of the animation
(start_frame, end_frame) = o.animation_data.action.frame_range

# Iterate over each frame
for f in range(int(start_frame), int(end_frame) + 1):
    # Set the current frame of the scene
    bpy.context.scene.frame_set(f)

    # Iterate over the vertices
    for v in o.data.vertices:
        # Transform each vertex with the current world-matrix
        print(o.matrix_world * v.co)
Note: the world-matrix of the object contains the current rotation, scaling and translation transforms due to the active frame of the animation sequence.
Obviously, this script is only meant to show how to get to the relevant information. In a real exporter, you might not iterate over the vertices directly, but over the faces; but I assume you know these things and only the animation-related part is new to you.
Just one final note: if you have a lot of frames, the amount of vertex data in your final model file might get very large, as you export O(numVertices * numFrames) many elements. Often animations, especially skeletal-based ones, can be exported more efficiently by only exporting the transforms to be applied to each part of the model (think torso, upper leg, lower leg, toes…). The transforms can then be nicely applied and interpolated in code with glTranslate, glRotate, glScale and friends (yes, deprecated, but good to make the point), assuming you are using OpenGL for rendering.
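As a starting point for that transform-based variant: instead of dumping vertices, you can decompose the object's world matrix per frame using mathutils' Matrix.decompose() (same assumptions about the active object as above):

import bpy

o = bpy.context.active_object
(start_frame, end_frame) = o.animation_data.action.frame_range
for f in range(int(start_frame), int(end_frame) + 1):
    bpy.context.scene.frame_set(f)
    # Export one transform per frame instead of every transformed vertex:
    loc, rot, scale = o.matrix_world.decompose()  # Vector, Quaternion, Vector
    print(f, loc, rot, scale)

For a multi-part model you would do this per bone or per child object, but the idea stays the same.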