fruitfly

a technical blog

7DFPS – Day 3: Collision Detection/Response

leave a comment »

Today I focused on getting a simple physics simulation running. The main work consisted of creating a collision detection system based on the BSP tree. I went for the well-described approach from this thesis I have been using previously. I have to say that I don't like the result yet: there are occasional slips where it seems to miss a polygon. I also haven't dealt with slopes and stairs yet. Jumping around the world is good enough for now. If I have time, I might go for this more solid "sweep"-based approach.
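
The post has no code for this, so here is a minimal, hedged sketch of the underlying idea: recursively push a point (or a crudely inflated sphere) down the BSP tree until it lands in a solid or empty leaf. The Node/Leaf layout and the radius trick are my own assumptions for illustration, not the thesis' swept approach.

class Leaf:
    def __init__(self, solid):
        self.solid = solid          # True = solid space, False = empty space

class Node:
    def __init__(self, normal, dist, front, back):
        self.normal = normal        # (nx, ny, nz) of the split plane
        self.dist = dist            # plane equation: dot(normal, p) - dist = 0
        self.front = front          # subtree on the positive side
        self.back = back            # subtree on the negative side

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def point_in_solid(node, point, radius=0.0):
    """Walk the tree until we land in a leaf; 'radius' inflates the planes so
    a sphere instead of a point is tested (crude, but enough for jumping around)."""
    while isinstance(node, Node):
        d = dot(node.normal, point) - node.dist
        if d > radius:
            node = node.front
        elif d < -radius:
            node = node.back
        else:
            # Within 'radius' of the split plane: conservatively check both sides.
            return (point_in_solid(node.front, point, radius) or
                    point_in_solid(node.back, point, radius))
    return node.solid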

Today I also implemented the split-plane algorithm/metric for the BSP tree generation. It is a very hard problem (actually NP-hard :-)) to pick split planes so that the tree ends up balanced with a minimal number of polygon splits. Currently, some rooms are completely covered by one convex leaf of the tree, but others are split because the algorithm sometimes has to favor a balanced split. I might try to improve the algorithm and add some additional weight if a split produces a convex subregion. That should make all simple convex rooms end up in their own leaf.
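
To make the metric concrete, here is a rough Python sketch of the kind of scoring I mean (illustrative only; the weight is arbitrary, and the convexity bonus mentioned above would be a further term): classify every polygon against a candidate plane, then penalize both imbalance and splits.

EPSILON = 1e-4

def classify_polygon(polygon, plane):
    """polygon: list of (x, y, z) vertices; plane: (normal, dist)."""
    normal, dist = plane
    front = back = 0
    for v in polygon:
        d = sum(n * c for n, c in zip(normal, v)) - dist
        if d > EPSILON:
            front += 1
        elif d < -EPSILON:
            back += 1
    if front and back:
        return 'spanning'
    return 'front' if front else ('back' if back else 'coplanar')

def score_split_plane(plane, polygons, split_weight=8):
    front = back = splits = 0
    for poly in polygons:
        side = classify_polygon(poly, plane)
        if side == 'spanning':
            splits += 1
        elif side == 'front':
            front += 1
        elif side == 'back':
            back += 1
    # Lower is better: both an unbalanced tree and many polygon splits cost.
    return abs(front - back) + split_weight * splits

def choose_split_plane(candidate_planes, polygons):
    return min(candidate_planes, key=lambda p: score_split_plane(p, polygons))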

Here is a screenshot of a room in the world, where each color represents one leaf in the BSP. Note how the whole room, because it is convex, has ended up in a single leaf.

bsp_leaf

Oh, and I started on portalizing the world for the culling algorithm. I am not sure yet whether I will precalculate potential visibility sets on the BSP leaves or just do portal-frustum clipping at runtime. Even though I have a powerful laptop, I am always surprised how high the fill rate of modern graphics cards is. I am rendering thousands of polygons and don't see any performance penalty yet. OK, I don't have any complex fragment shaders (I'm using the fixed-function pipeline API currently), but anyway…

Written by 38leinad

August 12, 2013 at 10:02 pm

Posted in Uncategorized

7DFPS – Day 2 – Binary Space Partition

leave a comment »

After all the coding for Day 1 I was actually a little exhausted and my brain still needed to digest some of the reading. So, I started the day off with a nice bicycle ride. That actually helped a lot. I had my moment of zen regarding Binary Space Partitioning on the bike: I think I now get how it will later help me with the static visibility calculations, i.e. Potentially Visible Sets (PVS).

Anyway, the day again consisted of a lot of reading:

  • BSP tree generation and how it can be used for visibility calculations, lighting and physics is really nicely described in this thesis. At the moment it is my main source of information on BSP, and I cannot help but wonder how close the concepts are to what Quake is using. But there is no mention of it whatsoever…
  • Some tips on BSP tree details are described in this rather simple txt file.
  • And before I forget: my first introduction to BSP trees was Michael Abrash's Graphics Programming Black Book. It describes the concepts in 2D, but the above-mentioned thesis gives the full details on how to do it in 3D, with all the necessary operations on points, planes and polygons to actually classify faces and cut up the world.

My first result of a BSPed world actually looked like this:

bsp_no_struct

Turning off the z-buffer, it actually rendered the world correctly in back-to-front order: reason enough to believe the algorithm is correct. But obviously, the world was partitioned too much. All the detail architecture like stairs and boxes contributed to the world partition, which is not what e.g. Quake 2 is doing. There, a distinction is made between structural and detail architecture: the BSP is generated from the structural architecture alone, and the detail architecture is just pushed into the leaves of the BSP tree. If the split planes are chosen well, almost every convex room of the world should end up in a leaf of its own.
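
For reference, the back-to-front rendering that made this test possible is just the classic painter's-algorithm traversal of the tree. A rough Python sketch (the node layout is assumed for illustration, not my actual Java code): visit the subtree on the far side of each split plane first, then the near side.

def render_back_to_front(node, camera_pos, draw_polygon):
    """Assumed layout: inner nodes have normal/dist/front/back (and optionally
    polygons lying on the plane), leaves have a list of polygons."""
    if node is None:
        return
    if not hasattr(node, 'normal'):                       # leaf
        for poly in getattr(node, 'polygons', []):
            draw_polygon(poly)
        return
    d = sum(n * c for n, c in zip(node.normal, camera_pos)) - node.dist
    near, far = (node.front, node.back) if d >= 0 else (node.back, node.front)
    render_back_to_front(far, camera_pos, draw_polygon)   # far side first
    for poly in getattr(node, 'polygons', []):            # polygons on the plane
        draw_polygon(poly)
    render_back_to_front(near, camera_pos, draw_polygon)  # near side last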

The main problem I found is that the map editor TrenchBroom targets Quake 1 and does not seem to support detail brushes yet; so there is no way to set this property. Also, I was already thinking about texturing the world, and as all the map editors are inherently tied to id Software's WAD file format, I decided to switch to Blender for modelling the map. There already exists an export plugin for the .map file format and some great introductions on how to use Blender for modelling levels for this format. Check here and here.

The Blender export plugin actually did not support detail brushes either, but I knew enough about the Python API to add this feature. Here is a better split:

bsp_detail

You can see in the screenshot that the room is still unnecessarily split. That's because my split-plane algorithm is rather naive and just takes a random face/plane.

What I still wanted to do but will push to tomorrow is the algorithm and metric for choosing a good splitting plane during BSP creation, and a visualization of the AABBs of the BSP tree leaves for debugging purposes. As everything that comes next relies heavily on the BSP, this might be a helpful debugging tool…

After that, tomorrow will consist of either collision detection and simple physics or the visibility/PVS algorithm.

Written by 38leinad

August 11, 2013 at 8:54 pm

Posted in Uncategorized

7DFPS – Day 1 – Constructive Solid Geometry

leave a comment »

csg

I decided yesterday to finally participate in a game jam again. Not just for one or two days, but the 7DFPS, which lasts a whole week. I am no big fan of the first-person-shooter genre from a gameplay perspective, but I am highly interested in the technologies used to make these visual worlds possible. With id Software's John Carmack as one of the celebrities in the field, pushing the technology for the past 20 years, I decided to challenge myself to take a deeper look at some of the concepts that make up the Quake engine, which has been used by a huge number of successful games, starting with id Software's in-house titles Quake 1 to 3. I will take this as a learning experience and not so much as a task to create a playable game at the end of the week. In case that happens, I won't be disappointed though…

The core idea behind the Quake engine family (id Tech 1 to 3; used in Quake 1 to Quake 3) seems to have stayed quite constant. Mainly: Binary Space Partitioning (BSP) and Constructive Solid Geometry (CSG) are used to create the game world and to define its solid and empty space. Querying the world database for visibility calculations, lighting calculations and physics is done via the BSP tree structure. The BSP tree itself, visibility (Potentially Visible Sets are built and attached to the BSP) and lighting (shadows/lights are baked into static textures that are then just blended with the standard diffuse textures of the world at runtime) are calculated in an expensive preprocessing step. At runtime, the BSP is then used mainly for rendering, physics and dynamic light calculations.

You can get a light introduction to the core engine from Fabien Sanglard's great source code reviews of Quake 1, Quake 2 and Quake 3. I read them again to refresh my understanding.

From there, the first day mainly consisted of reading articles and papers on the core technologies used by Quake's graphics engine: Binary Space Partitioning trees (BSP trees) and Constructive Solid Geometry.

To be able to build a BSP of the world, all the individual solid entities the world is made up of (lots of blocks) have to be merged via a CSG union operation. This leads to a set of solid and closed surfaces in which no two polygons/faces intersect each other. I read up on some CSG approaches over at flipcode. From there, the Laidlaw paper on CSG is a good next step, as it seems this was also the main approach used within Quake's CSG tools.

From there, I started to get more practical: let's try to create a simple world, load it, CSG it and BSP it. Obviously, I needed a fast way to create this world, define some export format and load it. As I am on the track of Quake, I decided to use the .map file format that is also used by all the level editors for the Quake engine family. TrenchBroom was the editor of choice because it runs on Windows and Mac OS. For the .map file format itself there is a great article available describing not only the structure but also some alternative approaches for the polygon creation and the CSG process.
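
Just to illustrate the very first step of the polygon creation described in that article: each brush face in a .map file is given by three points on its plane, from which the plane itself follows via a cross product. A hedged sketch (winding conventions differ between tools, so the sign of the normal here is an assumption):

def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def plane_from_points(p1, p2, p3):
    """Plane through the three points of a .map brush face: dot(normal, x) = dist."""
    normal = cross(sub(p3, p1), sub(p2, p1))
    length = sum(c * c for c in normal) ** 0.5
    normal = tuple(c / length for c in normal)
    dist = sum(n * c for n, c in zip(normal, p1))
    return normal, dist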

I also tried to read the QBSP tool (which is part of the Quake engine's asset-creation pipeline; i.e. it loads a .map file and does the CSG, BSP, visibility, lighting etc.) to get some help on implementing the CSG algorithm. It is really hard to read code with a lot of shared state and confusing names. Fabien Sanglard's commented repository clone helps a little to understand the interwoven codebase.

So what did I do except reading? I implemented the loading of the .map file (Java + LWJGL), and that's the result of today (as you can see in the screenshot in the top right): a loaded .map file, CSG'ed and rendered. The colored faces show nicely how the actual source polygons for the walls and floors have been split up into smaller regions to form one closed surface for the whole room.

Tomorrow comes the BSP.

Written by 38leinad

August 10, 2013 at 5:06 pm

Posted in Uncategorized

Arkham3D Post Mortem

with 3 comments

BGFH00LCYAAmSf6

For last month's One Game A Month competition I wrote Arkham3D. I am not very happy with the end result, even though I could be happy… Let me explain.

The game is by far the most complete one I have developed lately (especially in that timeframe), but due to the time constraints and other topics, I was not able to give it the final polish to make it as great as it could have become. For one thing, I discovered last minute that my asset loader is broken and randomly fails to load all textures and sounds correctly. I found that Dropbox (where the game is hosted) seems to deny too many simultaneous connections and silently just blocks/kills them (the Chrome network tab shows them as "pending" connections). Anyway, I was not able to give it the time to find and really fix this bug. That's why the game randomly fails to load. You can see it from the comments.

Another thing I discovered last minute was related to the randomly generated levels, which were the most fun part to work on and definitely something I will spend more time on in the future. Sometimes the random level generation breaks; I have no idea why, also because I currently have no time to fix it.

So, even though "One Game a Month" is a blessing for me to get something done, due to the time constraints it leads me to publish a game at the end of the month that is not the quality I wanted it to be. I know of other participants who polish a game after the month and only release it then. I respect that and would love to do so too, but the next, more interesting project always seems to be waiting already. I guess this is just another level of "not getting things finished".

Anyway, I would like to reflect on the game a little, as I am quite pleased with some core decisions:

  • Doing art yourself takes a lot of time. That's why this time I decided to get the art done by someone else. I went with a sprite pack from Oryx Design Lab. The pixel-art style is exactly what I personally would like to be able to do myself: a minimal number of pixels, but maximum character. I had to modify the sprites slightly to get some movement, and I had to create the walls and floors myself. So, it was a good balance between doing some art and not wasting too much time.
  • I used some sound packs from OpenGameArt.org for the sounds of the sword, pistol, enemies and footsteps, and it gives the game some great depth. I especially like the sound of the pistol. Just amazing what a poor man's pixely pistol sprite and a great sound can create. I will definitely spend more time on this in the future. The atmospheric background soundtrack also helps a lot. Thanks to DST for the great sounds.
  • I spent a lot of time on generating random levels, as the theme for this month was "rogue-like" and randomness/replayability is one major aspect defining this genre. I am quite happy with my basic approach as it is very extendable, and if I had had more time, the levels would look much better than they do right now. I describe it in a separate section below in case you are interested.
  • As the level generation is quite brute-force and thus takes 2–3 seconds for a large world, I decided to use web workers for the first time so the browser does not lock up. If you don't know them: they basically introduce a poor man's multi-threading concept to JavaScript. One major limitation is that you only have messaging via strings, i.e. if you want to pass objects, you have to convert them to a JSON string, send it over and then "eval" it again. For my purposes, though, this fit perfectly, as the output of the world generator is a JSON file in the same format the Tiled editor generates.
    I also liked the include directives a lot: a web worker thread is basically spawned with a JS file that should be executed. You don't have an HTML file with a lot of JavaScript included; thus, you need another way to import dependent JavaScript files. An include directive similar to C/C++'s is introduced for this. I really like it and would have hoped that it gets carried over to regular JavaScript on the main thread as well…
  • I switched editors from JetBrains WebStorm to Sublime Text 2. A really nice and responsive editor that works on Windows and Mac alike.

Random World Generation

The random world generation for Arkham3D uses quite some brute force, but it turned out to work just fine. If you look at the world from above, it is basically just a 2D array of tiles with certain properties: walls, floors, hedges. And on some tiles, there are entities like monsters or collectables (keys, health, …).

The random world generation takes as input just the width and height of the generated world and a seed for the random number generator, to be able to reproduce the randomness. At some point the idea was to be able to share these seeds between friends: after you complete a world in a good time, you can mail a friend a link to the game (containing the seed), and the friend can play the exact same level and challenge the time. This works because the randomly generated level is the same when the same seed is used. I might come back to this concept for some future game.
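
The mechanism itself is trivial; a tiny Python sketch of the idea (the generation details are placeholders, the point is only that the same seed reproduces the same level):

import random

def generate_level(width, height, seed):
    rng = random.Random(seed)            # dedicated generator, not the global one
    rooms = []
    for _ in range(10):                  # placeholder for the real placement loop
        w, h = rng.randint(3, 8), rng.randint(3, 8)
        x, y = rng.randint(0, width - w), rng.randint(0, height - h)
        rooms.append((x, y, w, h))
    return rooms

# Same seed, same level -- which is what makes the shareable link work.
assert generate_level(64, 64, seed=1234) == generate_level(64, 64, seed=1234)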

The algorithm for generating a random world of defined width and height is then as follows:

  • Randomly place dungeons/rooms of random size (obviously there is some min/max size) as far to the top and left of the map as possible. It has to be made sure the rooms do not overlap. If a room cannot be placed in the world because there is not enough space left, the room size is decreased one unit at a time and placement is tried again. Once no more rooms can be placed, the loop terminates.
  • We now have a world full of disconnected/closed rooms. The algorithm now looks for neighboring rooms and connects them with a portal. The connectivity information is stored in a graph structure.
  • After the previous step, we have a heavily connected undirected graph describing the connectivity between rooms. Each room connects to every neighboring room (if possible and no existing portal obstructs the creation of another one), so the world is not very challenging to explore. The next step therefore is to run a breadth-first search on the graph to create a spanning tree, i.e. portals unnecessary for connecting the world are removed (see the sketch after this list). The world is still connected, but there are no longer two ways to get from A to B. Much more like a maze to discover now.
  • Now we have a connected world/maze, but the rooms all look alike. That's why I defined certain "room natures", i.e. which floor and wall textures can be used together to form a room. These natures are then randomly applied to the rooms. So, one room is red brick, one is gray brick, etc.
  • Additionally, to not just have empty rectangular rooms, we randomly apply certain templates to the rooms, i.e. if a room has a certain size, randomly give it a separating wall or a centered inner room, or place some hedge tiles randomly in the room. This allows for more interesting room designs and more places for monsters to hide.
  • We have treasures to hide. Ideally we want them to be locked behind a door that first has to be opened with a key. The algorithm randomly places the treasures in the leaf nodes of the spanning tree (the player has to explore the world before finding the treasures) and then randomly walks up the spanning tree towards the root. At some point it places a door and marks all child nodes as locked by this door. So, if we place another door or key, we take a subtree that is not locked up yet. After all doors are placed, the keys are placed randomly in the area that is not locked behind a door. (I could have made it more interesting with chains, i.e. a door unlocks an area with another key, but I did not bother for now; the worlds are confusing enough.)
  • Now only the monsters and collectables are missing. They are placed randomly in the rooms based on a cost metric, i.e. a room has a certain hardness and monsters have a certain difficulty. So, random monsters are placed in a room until the maximum difficulty for that room is reached, e.g. 3 bats or 1 blue ghost. The difficulty increases as the player gets deeper into the tree.
  • As a last step, we place the player spawn at the root of the spanning tree.
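
The spanning-tree step mentioned above is plain breadth-first search over the room graph; a hedged sketch (the adjacency-dict layout is an assumption for illustration):

from collections import deque

def spanning_tree(neighbors, start):
    """neighbors: dict mapping each room to the set of rooms it shares a portal with."""
    visited = {start}
    kept_portals = []                     # (room, neighbor) pairs that survive
    queue = deque([start])
    while queue:
        room = queue.popleft()
        for other in neighbors[room]:
            if other not in visited:      # only the portal that first reaches a room is kept
                visited.add(other)
                kept_portals.append((room, other))
                queue.append(other)
    return kept_portals                   # len(neighbors) - 1 edges if the world is connected
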
Untitled

A randomly generated world with the overlaid spanning tree in white.

It was really interesting to use randomness to generate levels and not have to bother too much with level design for this game jam. You get instant replayability if the generator is doing a good job.

Conclusion

I am not sure if I will be able to keep up with the pace of One Game a Month and not end up with something half-polished at the end of the month. Either I have to take it for what it is (a training and learning experience) or I have to set my goals lower. A full 2.5D game like Arkham3D is definitely nothing I will try to pull off in one month again.

Also, I will be focusing on development for the Oculus Rift in the next months and thus might not have time to churn out another game. It will be more of a “Research in VR” period.

Written by 38leinad

April 7, 2013 at 12:16 pm

Posted in postmortem


HTML5 Game Development

with one comment

20130226_233917

After 2012 being "the year of the cloud", HTML5 is currently catching up to be the new buzzword of what "we need to have in our application". So, at work I have been doing a lot of research on technologies revolving around HTML5 and JavaScript. In my industry sector most HTML/JavaScript developers still code in a procedural fashion, produce interwoven JavaScript and HTML code and use alert() for debugging. I am pointing the finger at myself here too. While doing this research, though, I have become a big fan of JavaScript for its good parts: functions as first-class objects and closures, to name only two. Also, frameworks like angular.js teach you to code in a very structured fashion that has been common on the server side for years but is now also finding its way into the browser (the buzzword is dependency injection).

For my game development hobby projects I have been using Java for most things (libGDX is great for Android). It is a great language with amazing tooling/IDEs (Eclipse, IntelliJ). I especially like hot code replacement. Especially in the run loop of a game this can be very handy for rapid development, letting you immediately see your changes take effect. It might not be on the level of what Bret Victor envisions, but it is a first step.

Anyway, I have been wanting to try out WebGL for a long time now; most recently after finding this WebGL port of one of my favorite rendering engines. The context at work now gave me enough of an understanding of the surrounding technologies to actually try it for one of my game projects. I was hoping to jump directly into the actual game content and not have to learn a new language and environment from the ground up. And it actually worked out quite fine.

Also, I wanted to go in the direction of using the web for future games anyway; the barrier to starting a browser game is much lower than downloading and starting a game or even running a Java-applet-based game in the browser. This can be crucial for a high exposure rate of your game during a game competition like Ludum Dare.

I decided to join the "One Game a Month" initiative (#1GAM) and make a February game based on HTML5 technologies. The result is a simple retro-style jump-and-run à la Commander Keen named "Agent 386". You can find it here.

What technologies have I been using and what are some of the take-aways?

JavaScript

  • On the one hand, JavaScript is missing the classical inheritance model in an easily usable fashion and instead exposes a prototypal inheritance model; on the other hand, that model is so powerful that you can define the classical model yourself. I picked up the proposal by John Resig and basically copied the code over into my project. Voilà: classical inheritance with "extends" and super-calls. This somehow reminds me of the amazing strength of Lisp… just that JavaScript is actually widely used outside of academia.
  • I have come to learn that, for me, the function scope of local variables (instead of proper block scope) paired with closures is a really huge stumbling block and one of the worst parts of JavaScript. I have been making this mistake over and over again and really dislike the omnipresent "var self = this" workaround.
  • WebGL is just great! Instead of having to write a lot of boilerplate code just to get one triangle on the screen, in WebGL it is so easy: create a canvas, get its 3D context and you are ready. The code to load a shader is also quite straightforward if you include the shader code in a script block in the HTML page. As WebGL, just like OpenGL ES 2.0, does not come with a matrix stack, the glMatrix library is indispensable.
  • Web Audio I only got working in Chrome, as it seems Firefox does not support it yet; nevertheless, in Chrome it works great (I first had some issues with the sound being distorted, but that went away and never came back) and you can easily load and play an Ogg-Vorbis-encoded audio file in a few lines of code. Wait until Firefox fully supports it and the days of Flash are over. Well, the days of Flash are already numbered, but a Flash-based sound plugin was one of the last useful applications I could think of.
  • The LocalStorage API is great for storing save-games and high scores.
  • Now comes the greatest part: the Gamepad API! I bought an Xbox 360 game controller (the "for Windows" version) and hooked it up to my Mac mini (see this controller driver to get it running). I have the Mac connected to my TV running mainly XBMC as an entertainment system. But it also has Chrome on it. The perfect place to try out my jump-and-run and actually enjoy a few entertaining minutes in front of the TV (German TV is crap; I usually try not to turn on the TV except for listening to some music television). It worked like a charm, and I imagine there is a huge potential and market for casual browser-based games in the living room. PlayStation 4 has already blown it (what indie game developer can afford to publish a game on that platform?) and Microsoft might do so as well with the Xbox, but there still could be a new gold rush for indies and gaming in the living room. Apple TV might be the winner here, just as on the mobile market (a $100 annual developer subscription and a 70/30 revenue share; that was needed to start the gold rush and is all that's needed now). But I digress…
  • Require.js: I started experimenting with it at work and liked not having all my scripts/classes in the global scope and being a little more structured. No long lists of script tags in your HTML file; instead, state the dependencies of each JavaScript file locally and let require.js take care of resolving them. Kind of like imports in Java and includes in C, and then some. I was hoping that the require calls would also function as an easy visual clue for the dependencies each of my JavaScript classes has. But, due to the dynamic typing, a lot of classes might not end up listed as a "require": types of function parameters that flow into my methods don't need to be listed; only if I instantiate a type or reference static attributes do I have to include a require call. This is not a real problem in itself, as it is part of the nature of JavaScript, but I note it anyway because it means require.js basically fails to be useful in a game project of the scale I am working at. Async script loading is nice but rarely needed. Just list all your scripts, let them all load at the beginning, and you don't have to care about script loading anymore. And managing the dependency graph manually is also doable if you have a good architecture. It works for Quake as well…
  • Zepto.js: jQuery is a quite heavy and big framework, but without question it is indispensable in almost any web application. For just the simple DOM manipulations of a WebGL game, though, it might be overkill. Creating or querying a canvas (and loading the shaders from script tags) is no reason to include jQuery. Zepto.js is an implementation of the jQuery API without all the legacy browser support et al., making it lightweight and a reasonable include.
  • Duck typing: this is one thing about JavaScript I really started to appreciate. I have never done a one-man game project in a scripting language before, so this might be obvious to scripting language users in general, but duck typing is just great. I can define a function "distance(a,b)" to calculate the distance between two points, and all "a" and "b" need to define are the properties "x,y,z". The type is irrelevant. I can calculate the distance between the player and some other game object without having an explicit Vec3 type or similar, which I find overkill in smaller projects, especially in a game jam (see the sketch below).
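
The distance example from the last bullet, sketched in Python for consistency with the other snippets on this blog (the original is JavaScript; this is only to show the shape of the idea):

def distance(a, b):
    # Works for any two objects that expose x, y and z -- no explicit Vec3 type needed.
    return ((a.x - b.x) ** 2 + (a.y - b.y) ** 2 + (a.z - b.z) ** 2) ** 0.5

class Player:
    def __init__(self, x, y, z):
        self.x, self.y, self.z = x, y, z

class Pickup:
    def __init__(self, x, y, z):
        self.x, self.y, self.z = x, y, z

print(distance(Player(0, 0, 0), Pickup(3, 4, 0)))   # 5.0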

IDE, Browser, Runtime, etc..

  • I have a license for the WebStorm IDE at work, so I naturally chose to use it for my game as well. It has great debugger integration, and setting breakpoints in your code from the IDE is a big plus; I don't want to do this in the debugging panel of Chrome. But the missing hot code replacement is really a disadvantage. For developing with angular.js et al., WebStorm comes with a great Chrome plugin to immediately swap code changes into the browser. I think it is similar to a page refresh with a little bit of additional stardust; because of that it is useless for me in a WebGL context: make a code change and basically the game is reloaded. Maybe I am also missing something, because I have read that you CAN do hot code replacement in Chrome if you change your files directly in the Chrome developer panel ("does the WebStorm plugin maybe work in a similar fashion?", I am just now thinking…), but it did not work for me. I think I will try Sublime Text 2 for my next project, as the lightweight feel and responsiveness might make up for the missing debugger integration.
  • I used Chrome for most of the development and only started to test in Firefox and Safari when I was already well into development and almost 50% finished. I was surprised that the game immediately ran in both browsers without needing any modifications. Firefox without sound (I coded the access to Web Audio defensively, as it is the one HTML5 API the game does not have to rely on to function), but still impressive. I tested it quickly in Chrome on my Android phone and it did not work out of the box. But I also did not really care, as I don't like the feel of jump-and-run games on a touchscreen. A game controller with haptic feedback is the home turf of these kinds of games. Disabling cross-origin checks in the browser (--disable-web-security for Chrome) was also a big plus for loading config files and resources via Ajax without needing a web server during development.

My takeaway from this journey is that I will stick with HTML5 for future projects. It has all that's needed in a nice sandbox, and the runtime environment (the browser) is omnipresent. Everybody has a browser everywhere and can enjoy a casual game. Let's wait for the Apple TV and the Steam Box and we will have the living-room gold rush, with HTML5 taking a front seat!

I also have my asset-creation and development toolchain set up and ready for bigger projects now. As I wanted to learn, I did not go for Impact.js as the game library but created the framework from scratch myself. Sprite creation, spritesheet conversion, audio-file conversion to Ogg, etc. are set up in a make-based automated process. So, compiling all the assets, obfuscating/minifying the code and deploying to the web server is only one "make deploy" command. For the first time I have tried to rely on a lot of good tools that already exist instead of doing everything myself. The tools I used for Agent 386 are:

  • Tiled for creating the tile-based levels (before, it was usually Paint.NET; just “draw” a map into a PNG)
  • TexturePacker for creating spritesheets (commandline interface included!)
  • ASEprite alongside Paint.NET for the actual creation of animated sprites (it helps to see the animation immediately instead of starting up the game and checking there)
  • oggenc for converting WAV files to Ogg Vorbis
  • Bfxr for sounds

The only thing left for me to say: try out WebGL for game development, and if you are in the mood, check out Agent 386 on your big-screen TV with a gamepad and enjoy. For me it was certainly an interesting journey, and with only three weekends of almost all-nighters, I am proud to have been able to get something finished.

Written by 38leinad

February 28, 2013 at 11:42 pm

Posted in Uncategorized

Heartbeat

with one comment

It has been a long time since my last post. On the technical side, I have been heavily involved in researching computer graphics and game development. Since participating in the last Ludum Dare with Shit it's evolving I have regained my old interest from university in doing computer graphics. For this, I have been reading and implementing a lot. During the Ludum Dare I played with a raycasting-based software renderer à la Wolfenstein 3D. The next step was to actually implement a full 3D renderer in software. I mainly took the legacy fixed-function-pipeline OpenGL API and implemented the rendering underneath it in software. Implementing perspective projection, clipping in homogeneous coordinates and the actual scan conversion of the polygons all by hand helped a lot to gain a deeper understanding of the pipeline. I will write about this in a separate post once I have time. Just as a quick hint if you want to do something similar: this book is the holy grail!
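
As a small teaser for that separate post, here is a hedged Python sketch of the projection step only (the matrix layout and conventions are assumptions for illustration, not the renderer's actual code): multiply by a perspective matrix, do the perspective divide, then map to the viewport.

import math

def perspective_matrix(fov_y_deg, aspect, near, far):
    f = 1.0 / math.tan(math.radians(fov_y_deg) / 2.0)
    return [[f / aspect, 0.0, 0.0, 0.0],
            [0.0, f, 0.0, 0.0],
            [0.0, 0.0, (far + near) / (near - far), 2.0 * far * near / (near - far)],
            [0.0, 0.0, -1.0, 0.0]]

def project(point, m, width, height):
    x, y, z = point
    clip = [sum(m[r][c] * v for c, v in enumerate((x, y, z, 1.0))) for r in range(4)]
    ndc = [c / clip[3] for c in clip[:3]]        # perspective divide
    sx = (ndc[0] * 0.5 + 0.5) * width            # viewport mapping
    sy = (1.0 - (ndc[1] * 0.5 + 0.5)) * height   # flip y for screen coordinates
    return sx, sy, ndc[2]

print(project((0.0, 0.0, -5.0), perspective_matrix(60, 16 / 9, 0.1, 100.0), 1280, 720))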

It might not look like it, but trust me: it is pure gold!

Besides that, I have been reading a lot of id Software source code. I think it cannot be appreciated enough that they put this stuff up on GitHub for everyone to play with. My main focus was on the Quake 1 sources. In combination with Fabien Sanglard's source code review and Michael Abrash's Graphics Programming Black Book (available for free online), which contains some chapters on core Quake engine topics, it was a surprisingly easy read.

After working on all of these topics, I decided to get my Ludum Dare entry onto the Android Play Store. After doing a first port of the game to Android, I quickly noticed that software rendering is too much of a burden on the CPU of a mobile device and that I would need to leverage the GPU. At that point I basically threw away all I had and completely reimplemented the engine using OpenGL ES and libGDX (a really nice framework that lets you develop Android games on the desktop for the most part, without actually touching a device or emulator). As everything is new and looks much more polished now, I decided on a new title and will try to make my first steps as an indie game developer with it. It is called Boskovice, and the landing page with the current progress can be found here. I plan to make it into a casual first-person shooter for the Android platform with a strong retro feel. Even though I hate most of today's games (I haven't played a triple-A title in 5 years), I still like to play the old id Software, 3D Realms and Apogee titles. This is the feel I am aiming for. Let's see how it turns out.

If you like the idea of a casual retro-style first-person shooter for Android, let me know in the comments. I will also be looking for beta testers shortly. So, if you are interested, post in the comments or hit me up on Twitter.

Written by 38leinad

November 19, 2012 at 9:27 pm

Posted in android, graphics, opengl, Uncategorized


Blender: Exporting Keyframe-based Animations

with 2 comments

I just got asked how to export animations from within Blender. While collecting the essential snippets, I thought I might as well put them in a short post in case they can help somebody else. Note that this post assumes that you are familiar with Blender's Python API and thus only gives the most essential information to get you started, i.e. the main data paths to get animation data; more specifically, how to export the transformed vertices for every frame of the animation.

Let's assume you have defined an animation on some object in Blender, e.g. by manually setting keyframes or by recording a physics-based animation with the Blender Game Engine. Let's also assume the object is currently selected and active. The essential steps to access the transformed vertices are as follows:

# Get the active object
o = bpy.context.active_object
# Get index of the first and last frame of the animation
(start_frame, end_frame) = o.animation_data.action.frame_range
# Iterate over each frame
for f in range(int(start_frame), int(end_frame) + 1):
  # Set the current frame of the scene
  bpy.context.scene.frame_set(f)
  # Iterate over vertices
  for v in o.data.vertices:
    # Transform each vertex with the current world-matrix
    print(o.matrix_world * v.co)

Note: the world-matrix of the object contains the current rotation, scaling and translation transforms due to the active frame of the animation sequence.

Obviously, this script is only meant to show how to get to the relevant information. In a real exporter, you might not iterate over the vertices directly but over the faces; I assume you know these things and only the animation-related part is new to you.

Just one final note: if you have a lot of frames, the amount of vertex data in your final model file might get very large, as you export O(numVertices * numFrames) elements. Animations, especially skeletal ones, can often be exported more efficiently by only exporting the transforms to be applied to each part of the model (think torso, upper leg, lower leg, toes…). The transforms can then be nicely applied and interpolated in code with glTranslate, glRotate, glScale and friends (yes, deprecated, but good enough to make the point), assuming you are using OpenGL for rendering.
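
For completeness, here is a hedged sketch of that alternative for armature-based animations. It assumes the active object is deformed by an armature and the keyframes live on the armature's action; exact data paths may differ between Blender versions.

import bpy

o = bpy.context.active_object
armature = o.find_armature()                 # the armature deforming our mesh
action = armature.animation_data.action      # bone keyframes usually live here
(start_frame, end_frame) = action.frame_range

for f in range(int(start_frame), int(end_frame) + 1):
    bpy.context.scene.frame_set(f)
    for bone in armature.pose.bones:
        # bone.matrix is the bone's current 4x4 transform in armature space;
        # a real exporter would write it out instead of printing it
        print(f, bone.name, bone.matrix)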

Written by 38leinad

June 4, 2012 at 7:43 pm

Posted in blender

Blender 2.6: Exporting UV Texture Coordinates

with 2 comments

There is nothing I admire more than a software development team that is not shy about reorganizing an API for the better, even though it means stepping on some people's feet. Well, I guess the Blender team stepped on my foot, because one of my export scripts no longer worked once I upgraded to Blender 2.6. Actually, I should have known, because it is more the rule than the exception that the Python API changes from release to release…

This time it took me some hours to find out how to get to the UV texture coordinates of a mesh. After browsing through the API via the Python console for some time, I took the easy way out and peeked into the .OBJ model exporter that comes with Blender. So, here is how you get to the UV coordinates of a triangle of your mesh:

m = bpy.context.active_object.to_mesh(bpy.context.scene, True, 'PREVIEW')
uv1 = m.tessface_uv_textures.active.data[0].uv1
uv2 = m.tessface_uv_textures.active.data[0].uv2
uv3 = m.tessface_uv_textures.active.data[0].uv3

The part of the data path that has changed (at least I was able to get to the UV coordinates differently in the past) is the tessface_uv_textures property.

Note that I am accessing index [0] here. The array indices correspond to the indices of the faces array, so this example is for face zero. Also note that how many of uv1 to uvN are meaningful depends on how many vertices the face has. Often you would first triangulate the mesh so it contains only triangles.
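
Putting it together, here is a hedged sketch of walking all faces and pairing each one with its UVs via the same data paths (assuming the mesh has already been triangulated, so uv1 to uv3 are enough):

import bpy

m = bpy.context.active_object.to_mesh(bpy.context.scene, True, 'PREVIEW')
uv_layer = m.tessface_uv_textures.active.data

for i, face in enumerate(m.tessfaces):
    uvs = (uv_layer[i].uv1, uv_layer[i].uv2, uv_layer[i].uv3)
    # face.vertices are indices into m.vertices; pair each with its UV
    for vert_index, uv in zip(face.vertices, uvs):
        print(vert_index, m.vertices[vert_index].co, uv)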

I hope it helps someone, because Google mostly shows results for accessing the UV coordinates in ways that have been deprecated for some time now and thus no longer work.

Written by 38leinad

May 29, 2012 at 8:51 pm

Posted in blender


Gameboy Development on Mac OS X

with 2 comments


As I have gone more and more low-level over the past months, I was searching for a platform that is well understood/documented and not too fancy, and thus lets me use it to learn about several topics at once in a fun way: microprocessors/electrical engineering, emulators, dev tools.

Well, I found the old Gameboy to match that category:

  • Electrical engineering: building my own game cartridges/ROMs and ROM readers/writers, and playing around with other interfacing possibilities. As it happens, I still have my old Gameboy lying around… somewhere…
  • Emulators: write a simple emulator for the Gameboy's Z80 processor. Compared to others, the instruction set of the Z80 is quite small. Except for the CHIP-8 language, there are not many other processors that are this popular and have as small an instruction set.
  • Dev tools: understanding how the compiler and linker for this platform work and tinkering with them. As one of the most popular toolchains (the Rednex Gameboy Development System) is open source and a comparably small project, this is hopefully not a month-long endeavor.

And I like doing game development for fun anyway. With a quite old and restricted platform like the Gameboy, this is a nice thing to do with limited time and still get good results. Building even a simple game for PC or any of the latest mobile platforms (iOS, Android) can take months when you also have some sense for art and style. With a four-color 160×144 pixel display, you are so restricted that doing the graphics yourself is quite easy, even for someone who is not a great artist. I was surprised what can be achieved on a 14-hour flight. Maybe I will even enter the next Ludum Dare with a Gameboy game.

The first step required to dive into the world of Gameboy development is to set up the development tools, an emulator and some more useful tools like tile editors on your Mac. The main purpose of this post is to describe the steps I have taken. And as it was more than just click-click-finish at some stages, I hope this will also be helpful to others.

Development Kit

RGBDS seems to be a popular set of developer tools for the original Gameboy. It contains four command-line tools, two of which are the assembler and the linker. Unfortunately, there is no binary available for Mac OS. Fortunately, it is open source. With minor modifications I was able to compile the tools for Mac OS X Lion. You can find the compiled binaries (rgbds.tar) and the Makefile I used in my GitHub repository.

Emulators

The next thing I needed for development was an emulator with an embedded debugger. Who writes a working game with their eyes closed, loads it onto a Gameboy cartridge and has it just work? Well, at least not me.

The emulators I found to be the best matches are no$gmb and bgb. Both are similar feature-wise; bgb seems to have been modeled on no$gmb, so the interfaces look alike and the shortcuts in the debugger are mostly the same. bgb is still actively developed, and you can also get tips from the developer in the EFnet #gbdev IRC channel.

Both emulators are originally for Windows and they are not open source, so using Wine was the only real option. If you don't have Wine installed yet, you might want to download MacPorts. It is a package manager for Mac OS and makes it easy to install said software. As you also need the X11 window server installed for Wine to work properly, it is best to follow these instructions for the whole installation.

The installation of Wine itself is straightforward: after installing MacPorts, type "sudo port" in Terminal.app to start the MacPorts package manager. Supply your user's password and the prompt of the package manager should appear. Type "install wine" and do something else for the next half hour. This will download all dependencies (a lot) and install Wine on your system. I got an error like

:info:build Assembler messages:
:info:build Fatal error: invalid listing option `r'
:info:build winebuild: /opt/local/bin/gas -arch i386 failed with status 256

while installing Wine because it seemed to interfere with other packages I already had installed. If you also encounter this problem, follow the workaround at the bottom of this bug ticket:

sudo port clean wine
sudo port -f deactivate binutils
sudo port install wine
sudo port activate binutils

Once you have installed Wine successfully, you can start the emulator. I was not able to get bgb running yet, but no$gmb works just fine. Get the 32-bit Windows version from here and start it with "wine NO$GMB.EXE" from the command line.

Tools

As said before, a quite restricted platform like the Gameboy allows even a developer without a hand for art to create nice games. Still, you might want some tools to assist in the process. The Gameboy Tile Designer and the Gameboy Map Builder are two such tools: easily generate tiles and maps and export them directly into an assembler file containing the data required to work with the tiles/maps. These tools are again Windows-only but work just fine with Wine.

Unfortunately, as of this writing, the host for these tools seems to be shutting down its operation completely, so I hope the owner of these tools will find another host soon.

Getting Started

That's about all you need if you want to get into Gameboy development and have some retro fun. To point you in the right direction for getting started, I can recommend the notes of Wichita State University's 2008 Z80 Assembler Programming lecture. Luckily, it uses the exact same tools we just installed on our system 🙂

Written by 38leinad

March 3, 2012 at 1:57 pm

Posted in gameboy, macos

Practical Blender with GLKit – Part 2 – Blender Scripting with Python

with 12 comments

Welcome to the second part of this practical series on using Blender with GLKit-based iOS applications. The first part focused on setting up a basic GLKit Xcode project as the prerequisite for the following parts. As it is a quite new API, I think it was worth it. I also tried to give some links and tips for learning the basics of Blender modeling.

In this part, we will jump directly into the Python API of Blender. So, for this part I assume you have already tinkered with Blender, done some basic modeling and can find your way around the UI. I will also assume you have some basic knowledge of the Python scripting language. If not, I recommend googling for "python in 10 minutes". We will only be using the most basic features of the language.

In contrast to my initial plans, this is a rather lengthy post, as we have to introduce some basics and will go the full way to already write a very simple export functionality from Blender and the corresponding import functionality for our iOS application. My feeling while writing was that it wraps things up much more nicely if we actually have some kind of result in our iOS application to show off at the end of this part.

Introduction

Depending on how close a look at Blender you have already had, you might know that it comes with its own Python runtime in the application bundle. Large parts of the application are driven by Python (correct me if I am wrong), and essentially all operations and actions that you can trigger via the user interface are exposed to it. If you have a look into the application bundle, you will find the subfolder /Applications/blender.app/Contents/MacOS/2.59/scripts/ containing lots of Python script files. "addons" contains a lot of export scripts for different kinds of model formats. This is a very rich stash of code snippets you should be able to understand after this post.

When it comes to writing and executing your own Python scripts and tinkering with the API, you basically have three different options:

  1. Start Blender with a Python script for execution.
  2. Write a python script within the integrated Text Editor of Blender and execute it with the push of a button.
  3. Use the interactive Python command-line to write and execute a script line-by-line.
Up until now I have never used approach 1, because I am still in the learning phase; it makes sense for bulk operations on many Blender model files, though, I assume. Approach 2 we will use later (you can still use an external editor to write the script) to execute a whole script of operations on our Blender model. Approach 3 is really the one that makes exploring the API very easy: you can explore operations step by step, see the changes to your model, and copy the lines into a full script file if the result was as expected.

Python Console

Let's have a closer look at approach 3 and explore data structures and operations via the powerful Python Console. Open up this Blender model of the apple logo. If you still have the default interface layout, select the button in the lower-left corner of the window that should show the animation timeline. Change it to the Python Console as in the following picture.

Change one of the subviews to the interactive “Python Console”

This is the interactive Python console/REPL, just like the regular Python console. Read the blue output: pressing Up/Down to go back and forth in the command history is not such a special feature, but auto-completion is really the killer feature for quickly discovering the API. Have a try and type:
>>> bpy.
and hit Ctrl-Space. You should see a list of possible sub-modules you can expand the command with. Type
>>> bpy.context.
and hit Ctrl-Space again. You see a list of all methods and properties you can call on the context. Make it
>>> bpy.context.scene.objects
bpy.data.scenes["Scene"].objects 
and press Enter. This prints the list object containing all objects in the scene. Unfortunately, it is not a native list, so it does not actually print its contents. Go back one step in the command history to display the command again and wrap it in a list() call:
>>> list(bpy.context.scene.objects)
[bpy.data.objects["Point"], bpy.data.objects["Apple"], bpy.data.objects["Camera"]]
If you execute this, you should see a list of the objects in the scene. For me, the scene consists of three objects named “Point”, “Apple” and “Camera”. The first is a light-source, the second our apple-logo model and the last is the camera object in the scene.
But what if you don't know this ahead of time and need to find out in one of your scripts what type an object is (for the simple model exporter that we will write later, we are only interested in exporting the actual model)? Well, we can try to ask for the type. Let's do it the Python way:
>>> for o in bpy.context.scene.objects: type(o) 
... <Press Enter>
<class 'bpy_types.Object'>
<class 'bpy_types.Object'>
<class 'bpy_types.Object'>

OK, so obviously many objects are represented by the type bpy_types.Object in Blender. But there is also a property "type" on bpy_types.Object (which you can easily find with the auto-completion feature). Let's try it:

>>> for o in bpy.context.scene.objects: o.type
... 
'LAMP'
'MESH'
'CAMERA'

So, if we are interested in the camera, we can simply check the type property of the object. Actually, we do not care so much about the camera (and the light); we want to work with the apple logo, which is of type "MESH", basically representing a set of polygons. Assign it to a variable by executing this:

>>> apple = bpy.data.objects["Apple"]

We will use this variable soon when we explore what we can do with our model. But first, let's get a general introduction to the different modules of the API.

Common Modules

"bpy" is the main module of the Python API. It contains most other Blender-related submodules to access and interact with almost everything you can also do with the press of a button or other interface element in the UI. The most important modules are:

The context – bpy.context

This basically gives you access to the current state of the Blender UI and is necessary to understand, as we will use our scripts like macros you might know from other applications. If we switch a mode, we will actually see a change in the Blender UI; this is represented in the context.

  • bpy.context.mode: What is the current mode of Blender? When you work with your models and are a beginner like me, you are mostly in either Object or Edit mode. So, if you access this property, you should get "OBJECT" or "EDIT" as the return value.
  • bpy.context.active_object: Which object is currently active? To be truthful, I haven't found much use for this property yet, as I mostly have only one object selected at any time; in that case, the active object is also the selected object, which you can query with the next property.
  • bpy.context.selected_objects: Which objects are currently selected? Select/deselect the apple logo or the camera and see how the property changes.
  • bpy.context.scene: We already played with this before. It gives you access to all parts of the scene. So, whatever you see in the Outliner-view, you can access it from here.

The Outliner-view

The data – bpy.data

This submodule gives you access to all data that is part of the currently opened Blender file (.blend). This might be a superset of what is available through the context, as you might create objects programmatically that do not show up in the scene. If you try it for now, you should actually get the same list of objects as in the scene:

>>> list(bpy.data.objects)
[bpy.data.objects["Apple"], bpy.data.objects["Camera"], bpy.data.objects["Point"]]

When we start to make copies of objects to work on, this list will temporarily contain more objects.

The operations – bpy.ops

If you have been playing around yourself with the above-introduced property "bpy.context.mode", you might have noticed that it is not possible to set it. It is read-only, and changing it requires calling a corresponding operation from the "bpy.ops" module. Specifically, the operations are again grouped into submodules. To operate on an object (bpy_types.Object), check "bpy.ops.object"; for operations on a mesh, check "bpy.ops.mesh". We will later see how easy it is to find the Python API call for a button/element in the Blender UI. But let's play around with some basic operations first:

>>> bpy.ops.object.mode_set(mode='EDIT')
{'FINISHED'}
>>> bpy.ops.object.mode_set(mode='OBJECT')
{'FINISHED'}

If you were in Object mode before executing these commands, you should have seen the Blender UI switch to Edit mode and, with the second command, back to Object mode. This is an important command, because you can only perform certain operations in Edit mode, just like you are only able to do certain things in the UI when you are in a specific mode. The available modes also depend on the currently selected object. Select the camera object via the UI (right mouse click) and see that the call which worked before now fails:

>>> bpy.ops.object.mode_set(mode='EDIT')
Traceback (most recent call last):
 File "<blender_console>", line 1, in <module>
 File "/Applications/blender.app/Contents/MacOS/2.59/scripts/modules/bpy/ops.py", line 179, in __call__
 ret = op_call(self.idname_py(), None, kw)
TypeError: Converting py args to operator properties: enum "EDIT" not found in ('OBJECT')

Scripting Workflow

We will see that the whole experience of Python scripting in Blender is very beginner-friendly, and if you follow a specific workflow you can easily develop rather complex scripts in no time.

I will show you this workflow based on a simple example. We now want to do one very common operation that we will usually need in order to export a model for use with OpenGL ES: triangulation (OpenGL ES only allows us to draw triangles, not quads like standard OpenGL). It is a concrete example, but it might also help you with a lot of other problems in the future:

  1. Do your steps manually: find out what you want to do with the help of the UI. I.e. select the apple-logo model, switch into Edit mode, select all vertices (Select -> Select/Deselect All) and triangulate via Mesh -> Faces -> Quads to Tris. This should triangulate our model (actually, most of the apple is already triangulated; only the outer rim is quads). If that worked, undo the steps. If you know you can do it in the UI, you can for sure also do it easily via the Python API.

    Select all vertices with Select -> Select/Deselect All

    Triangulate via Mesh -> Faces -> Quads to Tris

  2. Do it step by step in the Python console: the manual steps worked fine, so let's try them step by step in the Python console. Some steps we have actually already done before:
    >>> apple.select = True
    >>> bpy.ops.object.mode_set(mode='EDIT')
    {'FINISHED'}
    But now: how do we select all vertices and how do we call the "Quads to Tris" operation? Well, when you did the steps manually, you might have seen something in Blender that is quite neat: while hovering over an operation in the menu, you see the Python API call behind it (see the last two screenshots). For "Select/Deselect All", bpy.ops.mesh.select_all is shown.
    >>> bpy.ops.mesh.select_all( <Press Ctrl-Space>
     select_all()
    bpy.ops.mesh.select_all(action='TOGGLE')

    You see that I first pressed Ctrl-Space after the function name so I see the doc-string and thus know which parameters the function requires. I noticed that it says "action='TOGGLE'" in the example. That sounds a little error-prone, because it might actually deselect everything if some vertices are already selected before this operation (for whatever reason). Let's look this one up in the API reference to make sure we select all instead of deselecting: go to Help -> API Reference in the Info view. A browser should open up with the documentation. Click on Operators (bpy.types) and then on Mesh Operators on the next page. We see that 'SELECT' is the action we want.

    >>> bpy.ops.mesh.select_all(action='SELECT')
    {'FINISHED'}

    For our last step, the actual triangulation, we can also see from the menu (see the screenshot above) that the Python call is bpy.ops.mesh.quads_convert_to_tris:

    >>> bpy.ops.mesh.quads_convert_to_tris()
    {'FINISHED'}
    We did it. And it was not so hard, thanks to this unbelievably neat feature of giving us hints on the Python API directly within the UI. This works for many different things. E.g. if you are in Object mode and hover over one of the XYZ sliders in the Transform/Location section, it shows "Object.location". So we directly know which property of the Object type it maps to: for our model, it is apple.location (there is actually more to it to really apply the transform to our underlying mesh, but we will see this in the next part of the series).

    Hovering over interface elements will give you a hint on the underlying Python API call

    I just cannot stop pointing out how neat this feature is 🙂

    Now, we just have to package it up into one Python script which we can execute as a whole; which is the next step.

  3. Create a script from the individual steps/commands of step 2. Basically, you can paste the individual commands from step 2 into a text file line by line to get a very static script. Enrich it with some variables and if-then-else cases and you are done.

You should have gotten the general idea of the three-step approach. Step 3 was only theory, though. We will now see how it can be done in practice with the Python Text Editor.

Python Text Editor

Take all the steps we have executed in the Python console and put them in a file "my_simple_exporter.py" with your favorite Python text editor and save the file (we will do the actual export stuff later). The script should look something like this:

apple = bpy.data.objects["Apple"]
apple.select = True
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='SELECT')
bpy.ops.mesh.quads_convert_to_tris()

Let’s open it up in Blender. For one of the views (I took the one with the Python Console), change it to “Text Editor” like this:

Switching to the Blender “Text Editor” view

Go to “Text -> Open Text Block” to open our script file. Note that Blender only keeps a reference to that external file, so you can still do all your editing with your favorite text editor. Blender will notice any changes (a small safety-ring icon is shown) and allow you to reload the file like this:

Blender will pick up on changes to the script and allow an easy reload

Let’s try to run the script via the “Run Script” button. You will see that Blender complains and nothing really happens. Our script failed. But why? The bad thing is that we cannot really see what Blender complained about. The Console.app should normally come to the rescue, but due to some bug/different behavior on MacOS the output is only written to the log daemon after Blender is closed. At least that is how it is for me, and I have read that other people had the same problem. The way I worked around this is by always opening Blender via the Terminal. The Terminal then directly prints the error.

Save the Blender file and close Blender. Open Blender again via the Terminal. For me, Blender is opened with this path:

/Applications/blender.app/Contents/MacOS/blender

Go into the Blender Text Editor, select my_simple_exporter.py and press “Run Script”. The following output should show up within the terminal:

Traceback (most recent call last): 
 File "/Users/daniel/Desktop/Blender/Session2/apple_for_logo.blend/my_simple_exporter.py", line 1, in <module> 
NameError: name 'bpy' is not defined

Well, if you are an experienced Python developer, you might have seen this coming right from the beginning: we have forgotten to import the required module. The Python console already imports it for us by default; if we create our own script file, we have to do it ourselves. Add the following line to the beginning of your script file:

import bpy

Try again. This time it should work fine.

The Mesh – Vertices, Faces, Normals

You might remember that when we called apple.type, Blender told us our model is of type “MESH”, but still all objects (model, camera and light) were of the Python type “bpy_types.Object”. Of course, all these things should have different properties: a light should have properties like luminance, a camera has a direction it is facing, and a mesh should have vertices, edges, faces, normals, etc.

Blender hides these specifics behind the data-property. With it, you get access to all these type-specific properties. Have a try and check the types of the data-property for camera, light and model:

>>> for k in bpy.context.scene.objects.keys(): print('%s has a data-property of type %s' % (k, type(bpy.context.scene.objects[k].data)))
... <Press Enter>
Point has a data-property of type <class 'bpy.types.PointLamp'>
Apple has a data-property of type <class 'bpy_types.Mesh'>
Camera has a data-property of type <class 'bpy.types.Camera'>

As we want to export our model, we are most interested in the type “bpy_types.Mesh”. You can use the auto-completion trick to see a full list of its properties, or check the Python API reference we used before. The properties we are most interested in for now are “vertices” and “faces”.

  • Mesh.vertices gives us a list of vertices (bpy.types.MeshVertex) in the model. A MeshVertex has a property “co” that gives us access to its coordinate and a property “normal” for the normal.
  • Mesh.faces gives us a list of faces (bpy_types.MeshFace). Each face in turn has a property “vertices” that gives us a list of indices into the Mesh.vertices-list. These indices define the face. After triangulating our model, each face’s vertices-list should contain exactly three elements.

Let’s try to list all faces with their vertex-coordinates:

>>> for (i,f) in enumerate(apple.data.faces): <Press Enter>
... for vertex_index in f.vertices: print('Face %d has vertex %s' % (i, apple.data.vertices[vertex_index].co)) <Press Enter>
... <Press Enter>
Face 0 has vertex Vector((-0.27777254581451416, 0.23545005917549133, 5.628979206085205))
Face 0 has vertex Vector((-0.2777732014656067, -0.17912480235099792, 4.809228897094727))
Face 0 has vertex Vector((-0.27777382731437683, 0.802850067615509, 4.097628593444824))
Face 1 has vertex Vector((-0.2777732014656067, -0.17912480235099792, 4.809228897094727))
Face 1 has vertex Vector((-0.27777382731437683, -0.233199805021286, 3.773179292678833))
Face 1 has vertex Vector((-0.27777382731437683, 0.802850067615509, 4.097628593444824))
<and a lot more...>

Note that I have inserted a tab at the beginning of the second line (the first line that starts with “…”) for this to work.

UPDATE: With Blender 2.6, the property “faces” has been renamed to “polygons”.
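
If you follow along with Blender 2.6 or newer, a small compatibility sketch like the following (my own addition, not part of the original steps) lets the face listing work with either attribute name:

mesh = apple.data
# Blender 2.6+ calls the face list "polygons", older versions call it "faces"
faces = mesh.polygons if hasattr(mesh, 'polygons') else mesh.faces
for (i, f) in enumerate(faces):
    print('Face %d uses vertex indices %s' % (i, list(f.vertices)))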

The first Export Script

With basic Python file-I/O knowledge, we now have all the information we need to write a very basic model exporter. Open up my_simple_exporter.py within the Python Text Editor or your external text editor and modify it so it looks as follows:

import bpy
import os

# Change if file should be written some place else
file = open(os.path.expanduser("~/Desktop/mymodel.mdl"), "w")

model_object = None
model_mesh = None

# Search for the first object of type MESH
for obj in bpy.context.scene.objects:
    if obj.type == 'MESH':
        model_object = obj
        model_mesh = obj.data
        break

# Triangulate
model_object.select = True
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='SELECT')
bpy.ops.mesh.quads_convert_to_tris()

for face in model_mesh.faces:
    for vertex_index in face.vertices:
        vertex = model_mesh.vertices[vertex_index]
        # Write each vertex on a separate line with x, y, z separated by tabs
        file.write(str(vertex.co[0]))
        file.write('\t')
        file.write(str(vertex.co[1]))
        file.write('\t')
        file.write(str(vertex.co[2]))
        file.write('\n')
There are basically three blocks of code:
  1. We find the first object in our Blender file that is of type “MESH”. This is a little bit more generic than the code we had before, where we just used the model with the name “Apple”.
  2. We triangulate our mesh; this should look familiar.
  3. We traverse all faces and vertex indices and write each vertex of a face on a new line of the output file (x, y, z separated by tabs). As we have triangulated our mesh, we know that every three lines in our exported model file define one face.
Execute this file within the Python Text Editor and check that a file called “mymodel.mdl” has been created on your Desktop.
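
If you want to double-check the result, a small verification sketch like the one below (plain Python, run outside of Blender; the path is simply the one our exporter used) reads mymodel.mdl back in and confirms that every line holds three tab-separated coordinates and that the number of lines is a multiple of three:

import os

path = os.path.expanduser("~/Desktop/mymodel.mdl")
with open(path) as f:
    lines = [line.split('\t') for line in f.read().splitlines() if line]

# Every line should be one vertex (x, y, z); every three lines should form one triangle
assert all(len(coords) == 3 for coords in lines)
assert len(lines) % 3 == 0
print('%d vertices, %d triangles' % (len(lines), len(lines) // 3))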

Loading the Model

So, what’s left is to actually import our model into our iOS application and render it. As the model format is quite straightforward and we assume basic OpenGL ES knowledge, I will only briefly describe the steps and let the code speak for itself.

Open up the Xcode project “GLKitAndBlender” from Part 1 and create a new Objective-C class “MyModel”.

#import <Foundation/Foundation.h>

@interface MyModel : NSObject

- (id)initWithFilename:(NSString *)filepath;
- (BOOL)load;
- (void)render;

@end

The interface consists of an initializer that takes the path of the model to load (mymodel.mdl), a load-method to actually read the file, and a render-method that does the OpenGL calls for rendering the model to the screen.

Here is the implementation of the class:

#import "MyModel.h"

#import <GLKit/GLKit.h>

@interface MyModel () {
@private
    NSUInteger _num_vertices;
    GLfloat *_vertices;

    NSString *_filepath;
}
@end

@implementation MyModel

- (id)initWithFilename:(NSString *)filepath
{
    self = [super init];
    if (self) {
        _filepath = filepath;
    }

    return self;
}

- (BOOL)load
{
    NSString *file_content = [NSString stringWithContentsOfFile:_filepath encoding:NSUTF8StringEncoding error:nil];
    NSArray *coordinates = [file_content componentsSeparatedByCharactersInSet:[NSCharacterSet characterSetWithCharactersInString:@"\n\t"]];
    _num_vertices = [coordinates count] / 3;
    _vertices = malloc(sizeof(GLfloat) * 3 * _num_vertices);

    NSLog(@"Number of vertices to load: %d", _num_vertices);

    int i=0;
    for (NSString *coordinate in coordinates) {
        // Skip empty strings (e.g. after the trailing newline) so we don't write past the array
        if ([coordinate length] == 0) continue;
        _vertices[i++] = atof([coordinate cStringUsingEncoding:NSUTF8StringEncoding]);
    }

    NSLog(@"Model loaded");

    return YES;
}

- (void)render
{
    static const float color[] = {
        0.8f, 0.8f, 0.8f, 1.0f
    };

    glVertexAttrib4fv(GLKVertexAttribColor, color);

    glEnableVertexAttribArray(GLKVertexAttribPosition);
    glVertexAttribPointer(GLKVertexAttribPosition, 3, GL_FLOAT, GL_FALSE, 0, _vertices);

    glDrawArrays(GL_TRIANGLES, 0, _num_vertices);

    glDisableVertexAttribArray(GLKVertexAttribPosition);
}

- (void)dealloc
{
    free(_vertices);
}

@end
The load-method reads each coordinate separately, converts it from an NSString to a GLfloat and stuffs it into the GLfloat array _vertices. The render-method then draws our model in the same way it already did for the swinging square from Part 1. We only disabled the color-attribute, as we draw everything with the same grayish color.
What’s left is to actually use the MyModel class. For this, first import the “mymodel.mdl” file into the Xcode project. Once you have done this, add an instance variable “model” to the GLKitAndBlenderViewController class header. The changes to the GLKitAndBlenderViewController implementation file contain comments and can be seen below:
#import "GLKitAndBlenderViewController.h"

#import "MyModel.h"

@implementation GLKitAndBlenderViewController

- (void)viewDidLoad
{
    [super viewDidLoad];

    EAGLContext *aContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];

    GLKView *glkView = (GLKView *)self.view;
    glkView.delegate = self;
    glkView.context = aContext;

    glkView.drawableColorFormat = GLKViewDrawableColorFormatRGBA8888;
    glkView.drawableDepthFormat = GLKViewDrawableDepthFormat16;
    glkView.drawableMultisample = GLKViewDrawableMultisample4X;

    self.delegate = self;
    self.preferredFramesPerSecond = 30;

    effect = [[GLKBaseEffect alloc] init];

    // Load the model
    model = [[MyModel alloc] initWithFilename:[[NSBundle mainBundle] pathForResource:@"mymodel" ofType:@"mdl"]];
    [model load];

    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    glEnable(GL_DEPTH_TEST);
}

#pragma mark GLKViewControllerDelegate

- (void)glkViewControllerUpdate:(GLKViewController *)controller
{
    static float transY = 0.0f;
    transY += 0.175f;

    static float deg = 0.0;
    deg += 0.1;
    if (deg >= 2*M_PI) {
        deg-=2*M_PI;
    }

    static GLKMatrix4 modelview;
    modelview = GLKMatrix4Translate(GLKMatrix4Identity, 0, 0, -25.0f);
    modelview = GLKMatrix4Rotate(modelview, deg, 0.0f, 1.0f, 0.0f);

    // Correction for loaded model because in blender z-axis is facing upwards
    modelview = GLKMatrix4Rotate(modelview, -M_PI/2.0f, 0.0f, 1.0f, 0.0f);
    modelview = GLKMatrix4Rotate(modelview, -M_PI/2.0f, 1.0f, 0.0f, 0.0f);

    effect.transform.modelviewMatrix = modelview;

    static GLKMatrix4 projection;
    GLfloat ratio = self.view.bounds.size.width/self.view.bounds.size.height;
    // GLKMatrix4MakePerspective expects the field of view in radians
    projection = GLKMatrix4MakePerspective(GLKMathDegreesToRadians(45.0f), ratio, 0.1f, 100.0f);
    effect.transform.projectionMatrix = projection;
}

#pragma mark GLKViewDelegate

- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect
{
    [effect prepareToDraw];

    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    // Render the model
    [model render];
}

@end
  1. We load the model within our viewDidLoad:-method
  2. We make some adjustments to the model-view matrix because, in contrast to OpenGL, in Blender the z-axis is facing upwards. If we did not do this, the model would be rendered in the wrong orientation. (We also changed the model-view matrix so the rendered apple-logo is nicely spinning.)
  3. Within the glkView:drawInRect:-method we call the render-method of our model.
You can get the full Xcode project here.
If you run the application in the simulator, you should see a rotating apple-logo. The 3D effect is not so strong, as we did not introduce lighting yet and we used the same color for all faces; but still: well done!

The final result; imagine the logo is rotating 😉

What’s next?

I wanted to end this second part of the series with something that already shows a nice result based on what we have learned. Thus, we have created a very simple exporter for our model, but we have skipped a lot of points that we have to tackle in the next part of the series:

  • When we do the triangulation of our model, we do it on the model that is stored in the Blender file itself. What we actually want in the future is to make a copy of our model and work on that. We might later want to enhance or remodel it, and having triangulated the original might make that harder. We could obviously make sure not to save the Blender file after executing the script, or always press Undo, but there are better ways.
  • The export script we have written is not a real Blender export script that appears under File -> Export. We should change this so we can also offer the user a dialog for choosing where to store the exported model and under which name. After all, we might work with a designer who does not want to modify Python code to change the location of a saved file.
  • In Blender the z-axis points upwards. This is different from OpenGL, where the y-axis is the one facing upwards. If we don’t want to correct the orientation with the model-view matrix in our OpenGL code, we have to enhance the export script to do this conversion already (a minimal sketch of such a conversion follows this list).
  • Our model format is very inefficient at the moment, as we have a lot of duplicate vertices in our model file (adjacent faces share vertices). Second, exporting to a binary format spares us the string-to-float conversion and would make loading the model much faster. Third, the final result in the iOS simulator does not look very 3D-ish yet. This is due to the missing normals for the lighting calculations. Also, textures/materials for the faces are missing.
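
To give an idea of the axis conversion mentioned above, here is a minimal sketch of how the write-loop of our exporter could do the conversion itself. The mapping (x, y, z) -> (x, z, -y) is the usual convention for going from Blender’s z-up to OpenGL’s y-up coordinate system; treat it as an assumption for now, we will look at it properly in the next part:

# Sketch: convert a Blender coordinate (z up) to an OpenGL coordinate (y up) before writing it
def to_opengl(co):
    return (co[0], co[2], -co[1])

for face in model_mesh.faces:
    for vertex_index in face.vertices:
        x, y, z = to_opengl(model_mesh.vertices[vertex_index].co)
        file.write('%f\t%f\t%f\n' % (x, y, z))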
You see, we have a lot to enhance here. Stay tuned for the next part of the series, where we will tackle all of these points (except materials/textures; those get a part of their own).

Update:

I accidentally checked in the apple_logo.blend file while in EDIT-mode. Unfortunately, Object.data is not updated until this mode is left again. So even though the model looks triangulated in Blender, the underlying data isn’t, and the export script will not export triangles. I updated the file on GitHub, but if you already downloaded the files before, you can just switch back to OBJECT-mode and try the export again. Thanks to Johnson Tsay for noticing this!
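
A way to guard against this in the script itself (a small sketch on top of our exporter, not something the original script did) is to switch back to Object-mode right after the triangulation, so that the mesh data we read afterwards is guaranteed to be up to date:

# After bpy.ops.mesh.quads_convert_to_tris()...
# Leave EDIT-mode so that Object.data reflects the triangulation we just performed
bpy.ops.object.mode_set(mode='OBJECT')

# ...and only then iterate over model_mesh.faces / model_mesh.vertices and write the file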

Written by 38leinad

November 2, 2011 at 9:54 am