fruitfly

a technical blog

Archive for the ‘blender’ Category

Blender: Exporting Keyframe-based Animations

with 2 comments

Just got asked how to export animations from within Blender. While collecting the essential snippets, I thought I might as well put them in a short post in case they help somebody else. Note that this post assumes that you are familiar with Blender’s Python API and thus only gives the most essential information to get you started, i.e. the main data-paths to get at animation data; more specifically, how to export the transformed vertices for every frame of the animation.

Let’s assume you have defined an animation on some object in Blender, e.g. by manually setting keyframes or by recording a physics-based animation with the Blender Game Engine. Let’s also assume the object is currently selected and active. The essential steps to access the transformed vertices are as follows:

import bpy

# Get the active object
o = bpy.context.active_object
# Get index of the first and last frame of the animation
(start_frame, end_frame) = o.animation_data.action.frame_range
# Iterate over each frame
for f in range(int(start_frame), int(end_frame) + 1):
  # Set the current frame of the scene
  bpy.context.scene.frame_set(f)
  # Iterate over vertices
  for v in o.data.vertices:
    # Transform each vertex with the current world-matrix
    print(o.matrix_world * v.co)

Note: the world-matrix of the object contains the rotation, scaling and translation transforms for the currently active frame of the animation sequence.

Obviously, this script is only meant to show how to get at the relevant information. In a real exporter, you might not iterate over the vertices directly, but over the faces; I assume you know these things and only the animation-related part is new to you.
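
To make that concrete, here is a minimal sketch of what a per-frame, face-based export could look like. It assumes the same 2.5-era API as above (the faces property was renamed to polygons in Blender 2.6) and a placeholder output path:

import bpy

o = bpy.context.active_object
(start_frame, end_frame) = o.animation_data.action.frame_range

# /tmp/animation.txt is just a placeholder path; change as needed
with open("/tmp/animation.txt", "w") as out:
    for f in range(int(start_frame), int(end_frame) + 1):
        bpy.context.scene.frame_set(f)
        for face in o.data.faces:
            for vi in face.vertices:
                # Transform into world-space for the current frame
                co = o.matrix_world * o.data.vertices[vi].co
                out.write("%f\t%f\t%f\n" % (co.x, co.y, co.z))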

Just one final note: if you have a lot of frames, the amount of vertex data in your final model file can get very large, as you export O(numVertices * numFrames) elements. Often animations, especially skeletal ones, can be exported more efficiently by only exporting the transforms to be applied to each part of the model (think torso, upper leg, lower leg, toes…). The transforms can then be nicely applied and interpolated in code with glTranslate, glRotate, glScale and friends (yes, deprecated, but good to make the point), assuming you are using OpenGL for rendering.

Written by 38leinad

June 4, 2012 at 7:43 pm

Posted in blender

Blender 2.6: Exporting UV Texture Coordinates

with 2 comments

There is nothing I admire more than a software development team that is not shy of reorganizing an API for the better even though it means stepping on some people’s feet. Well, I guess the Blender team stepped on my foot, because one of my export scripts no longer worked once I upgraded to Blender 2.6. Actually, I should have known, because it is more the rule than the exception that the Python API changes from release to release…

This time it took me some hours to find out how to get at the UV texture coordinates of a mesh. After browsing through the API via the Python console for some time, I took the easy way out and peeked into the .OBJ model exporter that ships with Blender. So, here is how you get the UV coordinates of a face of your mesh:

import bpy

# Get a temporary, evaluated mesh copy of the active object
m = bpy.context.active_object.to_mesh(bpy.context.scene, True, 'PREVIEW')
# UV coordinates of the first tessellated face
uv1 = m.tessface_uv_textures.active.data[0].uv1
uv2 = m.tessface_uv_textures.active.data[0].uv2
uv3 = m.tessface_uv_textures.active.data[0].uv3

The part of the data-path that has changed (at least I was able to get to the UV coordinates differently in the past) is the tessface_uv_textures property.

Note that I am accessing index [0] here. The array indices correspond to the indices of the faces-array, so this example is for face zero. Also note that the properties uv1 through uv4 depend on how many vertices the face has. Often you would first triangulate the mesh so you only have primitive polygons.
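
If you want to walk all faces instead of just face zero, a sketch like the following should work on Blender 2.6x; it assumes the mesh has been triangulated, so only uv1 to uv3 are read:

import bpy

m = bpy.context.active_object.to_mesh(bpy.context.scene, True, 'PREVIEW')
uv_data = m.tessface_uv_textures.active.data
for (i, face) in enumerate(m.tessfaces):
    # For a triangle, uv1..uv3 line up with face.vertices
    print("Face %d: %s %s %s" % (i, uv_data[i].uv1, uv_data[i].uv2, uv_data[i].uv3))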

I hope it helps someone, because Google mostly turns up results for accessing the UV coordinates via APIs that have been deprecated for some time now and thus no longer work.

Written by 38leinad

May 29, 2012 at 8:51 pm

Posted in blender


Practical Blender with GLKit – Part 2 – Blender Scripting with Python

with 12 comments

Welcome to the second part of this practical series on Blender for use with GLKit-based iOS applications. The first part focused on setting up a basic GLKit Xcode-project as the prerequisite for the following parts. As it is quite a new API, I think it was worth it. I also tried to give some links and tips for learning the basics of Blender modeling.

In this part, we will jump directly into the Python API of Blender. So I assume you have already tinkered with Blender, done some basic modeling and can find your way around the UI. I will also assume you have some basic knowledge of the Python scripting language. If not, I recommend googling for “python in 10 minutes”. We will only be using the most basic features of the language.

In contrast to my initial plans, this is a rather lengthy post: besides introducing some basics, we will go the full way and write a very simple export-functionality in Blender as well as the corresponding import-functionality for our iOS application. My feeling while writing was that it wraps things up much more nicely if we actually have some kind of result in our iOS application to show off at the end of this part.

Introduction

Depending on how close a look you have already taken at Blender, you might know that it comes with its own Python runtime in the application bundle. Large parts of the application are driven by Python (the core itself is native code), including many of the operations and actions that you can trigger via the user-interface. If you have a look into the application-bundle, you will find the subfolder /Applications/blender.app/Contents/MacOS/2.59/scripts/ containing lots of Python script-files. “addons” contains a lot of export scripts for different kinds of model formats. This is a very rich stash of code snippets you should be able to understand after this post.

When it comes to writing and executing your own Python scripts and tinkering with the API, you basically have three options:

  1. Start Blender with a Python script for execution.
  2. Write a python script within the integrated Text Editor of Blender and execute it with the push of a button.
  3. Use the interactive Python command-line to write and execute a script line-by-line.
Up until now I have never used approach 1 because I am still in the learning phase; it makes sense for bulk operations on many Blender-model files though, I assume (see the command below). Approach 2 we will use later (you can still use an external editor to write the script) to execute a whole script of operations on our Blender model. Approach 3 is really the one that makes exploring the API very easy. You can explore operations step-by-step, see the changes to your model and copy the lines into a full script-file if the result was as expected.
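
For reference, approach 1 boils down to a one-liner on the Terminal. The --background (run without UI) and --python flags are part of Blender’s command line; the .blend- and script-names here are just placeholders:

/Applications/blender.app/Contents/MacOS/blender --background mymodel.blend --python my_script.py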

Python Console

Let’s have a closer look at approach 3 and explore data-structures and operations via the powerful Python Console. Open up this Blender model of the apple-logo. If you still have the default interface-layout, select the button in the lower-left corner of the window that should show the animation time-line. Change it to the Python Console like in the following picture.

Change one of the subviews to the interactive “Python Console”

This is the interactive Python Console/REPL, just like the regular Python console. Read the blue output: pressing Up/Down to go back and forth in the command-history is nothing special, but auto-completion is really the killer-feature for quickly discovering the API. Have a try and type:
>>> bpy.
and hit Ctrl-Space. You should see a list of possible sub-modules you can expand the command with. Type
>>> bpy.context.
and hit Ctrl-Space again. You see a list of all methods and properties you can call on the context. Make it
>>> bpy.context.scene.objects
bpy.data.scenes["Scene"].objects 
and press Enter. This prints the list-object containing all objects in the scene. Unfortunately, it is no native list, so it does not actually print its contents. Go back one step in the command history to display the command again and enclose it in a list-call:
>>> list(bpy.context.scene.objects)
[bpy.data.objects["Point"], bpy.data.objects["Apple"], bpy.data.objects["Camera"]]
If you execute this, you should see a list of the objects in the scene. For me, the scene consists of three objects named “Point”, “Apple” and “Camera”. The first is a light-source, the second our apple-logo model and the last is the camera object in the scene.
But what if you don’t know this ahead of time and need to find out in one of your scripts what type an object is (for the simple model exporter that we will write later, we are only interested in exporting the actual model)? Well, we can try to ask for the type. Let’s do it the Python-way:
>>> for o in bpy.context.scene.objects: type(o) 
... <Press Enter>
<class 'bpy_types.Object'>
<class 'bpy_types.Object'>
<class 'bpy_types.Object'>

Ok, so obviously, many objects are represented by the type bpy_types.Object in Blender. But there is also a property “type” on bpy_types.Object (which you can find easily with the auto-completion feature). Let’s try it:

>>> for o in bpy.context.scene.objects: o.type
... 
'LAMP'
'MESH'
'CAMERA'

So, if we are interested in the camera, we can simply check the type-property of the object. Actually, we do not care so much about the camera (and the light); we want to work with the apple-logo, which is of type “MESH”, basically representing a set of polygons. Assign it to a variable by executing this:

>>> apple = bpy.data.objects["Apple"]

We will use this variable soon when we are going to explore what we can do with our model. But first, let’s get some general introduction to the different modules of the API.

Common Modules

“bpy” is the main module of the Python API. It contains most other Blender-related sub-modules to access and interact with almost everything you could also do with the press of a button or another interface element in the UI. The most important modules are:

The context – bpy.context

This basically gives you access to the current state of the Blender UI and is necessary to understand, as we will use our scripts like macros you might know from other applications. If we switch a mode, we will actually see a change in the Blender UI; this is represented in the context.

  • bpy.context.mode: What is the current mode of Blender? When you work with your models and are a beginner like me, you are mostly either in Object- or Edit-mode. So, if you access this property, you should get “OBJECT” or (when editing a mesh) “EDIT_MESH” as return-value.
  • bpy.context.active_object: Which object is currently active? To be truthful, I haven’t found much use for this property yet, as I mostly only have one object selected at any time; in that case, the active object is also the selected object, which you can query with the next property.
  • bpy.context.selected_objects: Which objects are currently selected? Select/deselect the apple-logo or the camera and watch the property change.
  • bpy.context.scene: We already played with this before. It gives you access to all parts of the scene. So, whatever you see in the Outliner-view, you can access it from here.

The Outliner-view
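
A quick, hypothetical console session illustrating these properties (the output obviously depends on your current mode and selection):

>>> bpy.context.mode
'OBJECT'
>>> [o.name for o in bpy.context.selected_objects]
['Apple']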

The data – bpy.data

This submodule gives you access to all data that is part of the currently opened Blender-file (.blend). This can be a superset of what is available through the context, as you might create objects programmatically that do not show up in the scene. If you try it now, you should actually get the same list of objects as in the scene:

>>> list(bpy.data.objects)
[bpy.data.objects["Apple"], bpy.data.objects["Camera"], bpy.data.objects["Point"]]

When we start to make copies of objects to work on, this list will temporarily contain more objects.
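
To see the superset-effect in action, you can create a mesh datablock programmatically; it shows up in bpy.data but not in the scene until it is linked to an object (a hypothetical console session):

>>> m = bpy.data.meshes.new("Scratch")
>>> m.name
'Scratch'
>>> "Scratch" in [o.name for o in bpy.context.scene.objects]
False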

The operations – bpy.ops

If you have been playing around with the above-introduced property “bpy.context.mode” yourself, you might have noticed that it is not possible to set it. It is read-only, and changing it requires calling a corresponding operation from the “bpy.ops” module. Specifically, the operations are grouped into submodules again. To operate on an object (bpy_types.Object), check “bpy.ops.object”; for operations on a mesh, check “bpy.ops.mesh”. We will later see how easy it is to find the Python API call for a button/element in the Blender UI. But let’s play around with some basic operations first:

>>> bpy.ops.object.mode_set(mode='EDIT')
{'FINISHED'}
>>> bpy.ops.object.mode_set(mode='OBJECT')
{'FINISHED'}

If you were in Object-mode before executing these commands, you should have seen the Blender UI switch to Edit-mode and, with the second command, back to Object-mode. This is an important command, because you can only perform certain operations in Edit-mode, just like you are only able to do certain things in the UI when you are in a specific mode. The available modes also depend on the currently selected object. Select the camera-object via the UI (right mouse-click) and see that the call which worked before now fails:

>>> bpy.ops.object.mode_set(mode='EDIT')
Traceback (most recent call last):
 File "<blender_console>", line 1, in <module>
 File "/Applications/blender.app/Contents/MacOS/2.59/scripts/modules/bpy/ops.py", line 179, in __call__
 ret = op_call(self.idname_py(), None, kw)
TypeError: Converting py args to operator properties: enum "EDIT" not found in ('OBJECT')
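
A script that should not die on such cases can simply check the object type before switching modes; a small sketch:

import bpy

obj = bpy.context.active_object
# Only switch to Edit-mode for object types that support it
if obj is not None and obj.type == 'MESH':
    bpy.ops.object.mode_set(mode='EDIT')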

Scripting Workflow

We will see that the whole experience of Python scripting in Blender is very beginner-friendly; and if you follow a specific workflow, you can very easily develop rather complex scripts in no time.

I will show you this workflow based on a simple example. We now want to do one very common operation that we will usually need when exporting a model for use with OpenGL ES: triangulation (OpenGL ES only allows us to draw triangles, not quads like in standard OpenGL). It is a concrete example, but the workflow might also help you with a lot of other problems in the future:

  1. Do your steps manually: Find out what you want to do with the help of the UI. I.e. select the apple-logo model, switch into Edit-mode, select all vertices (Select -> Deselect/Select All) and triangulate via Mesh -> Faces -> Quads to Tris. This should triangulate our model (actually, most of the apple is already triangulated; only the outer rim is quads). If that worked, undo the steps. Once you know you can do it in the UI, you can be sure it is also easily done via the Python API.

    Select all vertices with Select -> Select/Deselect All

    Triangulate via Mesh -> Faces -> Quads to Tris

  2. Do it step-by-step in the Python-console: The manual steps worked fine, so let’s repeat them in the Python console. Some steps we have actually already done before:
    >>> apple.select = True
    >>> bpy.ops.object.mode_set(mode='EDIT')
    {'FINISHED'}
    But now: how to select all vertices and how to call the "Quads to Tris" operation? Well, when you did the steps manually, you might have seen something in Blender that is quite neat. While hovering over an operation in the menu, you see the Python API call that lies behind it (see the last two screenshots). For "Select/Deselect All", bpy.ops.mesh.select_all is shown.
    >>> bpy.ops.mesh.select_all( <Press Ctrl-Space>
     select_all()
    bpy.ops.mesh.select_all(action='TOGGLE')

    You see that I first pressed Ctrl-Space after the function-name, so I see the doc-string and thus know which parameters the function requires. Notice that it says “action=’TOGGLE'” in the example. That sounds a little error-prone, because it might actually deselect everything if some vertices were already selected before this operation (for whatever reason). Let’s look this one up in the API reference to make sure we select rather than deselect: go to Help -> API Reference in the Info-view. A browser should open up with the documentation. Click on Operators (bpy.types) and further click on Mesh Operators on the next page. We see that ‘SELECT’ is the action that we want.

    >>> bpy.ops.mesh.select_all(action='SELECT')
    {'FINISHED'}

    For our last step, the actual triangulation, we can also see from the menu (see screenshot above) that the Python-call is bpy.ops.mesh.quads_convert_to_tris:

    >>> bpy.ops.mesh.quads_convert_to_tris()
    {'FINISHED'}
    We did it. And it was not so hard, thanks to this unbelievably neat feature of giving us hints on the Python API directly within the UI. This works for many different things. E.g. if you are in Object-mode and hover over one of the XYZ-sliders in the Transform/Location-section, it shows "Object.location". So we directly know which property of the Object-type it maps to: for our model, it is apple.location (actually there is a bit more to it to actually apply the transform to our underlying mesh, but we will see this in the next part of the series).

    Hovering over interface elements will give you a hint on the underlying Python API call

    I just cannot stop pointing out how neat this feature is 🙂

    Now we just have to package it all up into one Python script which we can execute as a whole; that is the next step.

  3. Create a script of the individual steps/commands from step 2. Basically, you can paste the individual commands from step 2 into a text-file line-by-line to get a very static script. Enrich it with some variables and if-then-else cases and you are done.

You should have gotten the general idea of the three-step approach. Step 3 was only theory, though. We will now see how it can be done in practice with the Python Text Editor.

Python Text Editor

Take all the steps we have executed on the Python Console and put them in a file “my_simple_exporter.py” with your favorite Python text-editor and save the file (we will do the actual export-stuff later). The script should look something like this:

apple = bpy.data.objects["Apple"]
apple.select = True
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='SELECT')
bpy.ops.mesh.quads_convert_to_tris()

Let’s open it up in Blender. For one of the views (I took the one of the Python Console), change it to “Text Editor” like this:

Switching to the Blender “Text Editor” view

Go to “Text -> Open Text Block” to open our script-file. Note that Blender only keeps a reference to that external file, so you can still do all your editing with your favorite text-editor. Blender will notice any changes (a small safety-ring is shown) and allow you to reload the file like this:

Blender will pick up on changes to the script and allow an easy reload

Let’s try to run the script via the “Run Script” button. You will see that Blender complains but nothing really happens. Our script failed. But why? The bad thing is that we cannot really see what Blender complained about. The Console.app should normally come to the rescue, but due to some bug/different behavior on macOS, the output is only written to the log-daemon after Blender is closed. At least that is how it is for me, and I have read that other people had the same problem. The way I work around this is by always opening Blender via the Terminal. The Terminal will then directly print our error.

Save the Blender file and close Blender. Open Blender again via the Terminal. For me, Blender is opened with this path:

/Applications/blender.app/Contents/MacOS/blender

Go into the Blender Text Editor, select my_simple_exporter.py and press “Run Script”. The following output should show up within the terminal:

Traceback (most recent call last): 
 File "/Users/daniel/Desktop/Blender/Session2/apple_for_logo.blend/my_simple_exporter.py", line 1, in <module> 
NameError: name 'bpy' is not defined

Well, if you are an advanced Python developer, you might have seen this coming right from the beginning: we have forgotten to import the required modules. The Python console already imports these for us by default; if we create our own script-file, we have to do it ourselves. Add the following line to the beginning of your script-file:

import bpy

Try again. This time it should work fine.

The Mesh – Vertices, Faces, Normals

You might remember, when we queried the objects’ types, that Blender told us our model is of type “MESH”, but still all objects (model, camera and light) were of the Python-type “bpy_types.Object”. Surely all these things should have different properties: a light should have properties like luminance, a camera has a direction it is facing, and a mesh should have vertices, edges, faces, normals, etc.

Blender hides these specifics behind the data-property. With it, you get access to all the type-specific data. Have a try and check the types of the data-property for camera, light and model:

>>> for k in bpy.context.scene.objects.keys(): print('%s has a data-property of type %s' % (k, type(bpy.context.scene.objects[k].data)))
... <Press Enter>
Point has a data-property of type <class 'bpy.types.PointLamp'>
Apple has a data-property of type <class 'bpy_types.Mesh'>
Camera has a data-property of type <class 'bpy.types.Camera'>

As we want to export our model, we are most interested in the type “bpy_types.Mesh”. You can use the auto-completion trick to see a full list of its properties, or check the Python API reference we used before. The properties we are most interested in for now are “vertices” and “faces”.

  • Mesh.vertices gives us a list of vertices (bpy.types.MeshVertex) in the model. A MeshVertex has as property “co” that gives us access to its coordinate and a property “normal” for the normal.
  • Mesh.faces gives us a list of faces (bpy_types.MeshFace). Each face again has a property “vertices” that gives us a list of the indices into the Mesh.vertices-list. These indices define the face. After triangulating our model, we should only have three elements in each of the face’s vertices-lists.

Let’s try to list all faces with their vertex-coordinates:

>>> for (i,f) in enumerate(apple.data.faces): <Press Enter>
... for vertex_index in f.vertices: print('Face %d has vertex %s' % (i, apple.data.vertices[vertex_index].co)) <Press Enter>
... <Press Enter>
Face 0 has vertex Vector((-0.27777254581451416, 0.23545005917549133, 5.628979206085205))
Face 0 has vertex Vector((-0.2777732014656067, -0.17912480235099792, 4.809228897094727))
Face 0 has vertex Vector((-0.27777382731437683, 0.802850067615509, 4.097628593444824))
Face 1 has vertex Vector((-0.2777732014656067, -0.17912480235099792, 4.809228897094727))
Face 1 has vertex Vector((-0.27777382731437683, -0.233199805021286, 3.773179292678833))
Face 1 has vertex Vector((-0.27777382731437683, 0.802850067615509, 4.097628593444824))
<and a lot more...>

Note that I have inserted a tab in the second line (the first line that starts with “…”) for this to work.

UPDATE: With Blender 2.6, the property “faces” has been renamed to “polygons”.
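
If a script needs to run on both versions, a small compatibility shim like the following should do; mesh here stands for any bpy_types.Mesh, e.g. apple.data:

# Blender 2.6+ calls them "polygons", 2.5x "faces"
faces = mesh.polygons if hasattr(mesh, "polygons") else mesh.faces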

The first Export Script

With basic Python file-IO knowledge, we now have all the information we need to write a very basic model-exporter. Open up my_simple_exporter.py within the Python Text Editor or your external text-editor and modify it so it looks as follows:

import bpy
import os

# Change this path if the file should be written somewhere else
file = open(os.path.expanduser("~/Desktop/mymodel.mdl"), "w")

model_object = None
model_mesh = None

# Search for the first object of type MESH
for obj in bpy.context.scene.objects:
    if obj.type == 'MESH':
        model_object = obj
        model_mesh = obj.data
        break

# Triangulate
model_object.select = True
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='SELECT')
bpy.ops.mesh.quads_convert_to_tris()

for face in model_mesh.faces:
    for vertex_index in face.vertices:
        vertex = model_mesh.vertices[vertex_index]
        # Write each vertex on a separate line with x, y, z separated by tabs
        file.write(str(vertex.co[0]))
        file.write('\t')
        file.write(str(vertex.co[1]))
        file.write('\t')
        file.write(str(vertex.co[2]))
        file.write('\n')

file.close()
There are basically three blocks of code:

  1. We find the first object in our Blender file that is of type “MESH”. This is a little more generic than the code we had before, where we just used the model named “Apple”.
  2. We triangulate our mesh; this should look familiar.
  3. We traverse all faces and vertex-indices to write each vertex of a face on a new line of the output file (x, y, z separated by tabs). As we have triangulated our mesh, we know that every three lines in our exported model-file define one face.

Execute this file within the Python Text Editor and check that a file called “mymodel.mdl” has been created on your Desktop.
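
If you want to double-check the exported file without firing up Xcode, a tiny sanity-check sketch like the following (run with a regular Python interpreter, with the path adjusted to your Desktop) should do:

with open("mymodel.mdl") as f:
    coords = [float(c) for line in f for c in line.split('\t')]

# 3 vertices with 3 coordinates each per triangle
assert len(coords) % 9 == 0
print("%d triangles exported" % (len(coords) // 9))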

Loading the Model

So, what’s left is to actually import our model into our iOS application and render it. As the model-format is quite straightforward and we assume basic OpenGL ES knowledge, I will only briefly describe the steps and let the code speak for itself.

Open up the Xcode project “GLKitAndBlender” from Part 1 and create a new Objective-C class “MyModel”.

#import <Foundation/Foundation.h>

@interface MyModel : NSObject

- (id)initWithFilename:(NSString *)filepath;
- (BOOL)load;
- (void)render;

@end

The interface consists of an initializer that takes the path of the model to load (mymodel.mdl), a load-method to actually read the file, and a render-method to do the OpenGL-calls for rendering the model to the screen.

Here is the implementation of the class:

#import "MyModel.h"

#import <GLKit/GLKit.h>

@interface MyModel () {
@private
    NSUInteger _num_vertices;
    GLfloat *_vertices;

    NSString *_filepath;
}
@end

@implementation MyModel

- (id)initWithFilename:(NSString *)filepath
{
    self = [super init];
    if (self) {
        _filepath = filepath;
    }

    return self;
}

- (BOOL)load
{
    NSString *file_content = [NSString stringWithContentsOfFile:_filepath encoding:NSUTF8StringEncoding error:nil];
    NSArray *coordinates = [file_content componentsSeparatedByCharactersInSet:[NSCharacterSet characterSetWithCharactersInString:@"\n\t"]];
    _num_vertices = [coordinates count] / 3;
    _vertices = malloc(sizeof(GLfloat) * 3 * _num_vertices);

    NSLog(@"Number of vertices to load: %d", _num_vertices);

    int i=0;
    for (NSString *coordinate in coordinates) {
        _vertices[i++] = atof([coordinate cStringUsingEncoding:NSUTF8StringEncoding]);
    }

    NSLog(@"Model loaded");

    return YES;
}

- (void)render
{
    static const float color[] = {
        0.8f, 0.8f, 0.8f, 1.0f
    };

    glVertexAttrib4fv(GLKVertexAttribColor, color);

    glEnableVertexAttribArray(GLKVertexAttribPosition);
    glVertexAttribPointer(GLKVertexAttribPosition, 3, GL_FLOAT, GL_FALSE, 0, _vertices);

    glDrawArrays(GL_TRIANGLES, 0, (GLsizei)_num_vertices);

    glDisableVertexAttribArray(GLKVertexAttribPosition);
}

- (void)dealloc
{
    free(_vertices);
}

@end

The load-method reads each coordinate separately, converts it from an NSString to a GLfloat and stuffs it into the GLfloat-array _vertices. The render-method then draws our model the same way it already drew the swinging square from Part 1. We only disabled the color-attribute, as we draw everything with the same grayish color.
What’s left is to actually use the MyModel-class. For this, first import the “mymodel.mdl” file into the Xcode project. Once you have done this, add an instance variable “model” to the GLKitAndBlenderViewController-class header. The changes to the GLKitAndBlenderViewController implementation-file contain comments and can be seen below:
#import "GLKitAndBlenderViewController.h"

#import "MyModel.h"

@implementation GLKitAndBlenderViewController

- (void)viewDidLoad
{
    [super viewDidLoad];

    EAGLContext *aContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];

    GLKView *glkView = (GLKView *)self.view;
    glkView.delegate = self;
    glkView.context = aContext;

    glkView.drawableColorFormat = GLKViewDrawableColorFormatRGBA8888;
    glkView.drawableDepthFormat = GLKViewDrawableDepthFormat16;
    glkView.drawableMultisample = GLKViewDrawableMultisample4X;

    self.delegate = self;
    self.preferredFramesPerSecond = 30;

    effect = [[GLKBaseEffect alloc] init];

    // Load the model
    model = [[MyModel alloc] initWithFilename:[[NSBundle mainBundle] pathForResource:@"mymodel" ofType:@"mdl"]];
    [model load];

    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    glEnable(GL_DEPTH_TEST);
}

#pragma mark GLKViewControllerDelegate

- (void)glkViewControllerUpdate:(GLKViewController *)controller
{
    static float transY = 0.0f;
    transY += 0.175f;

    static float deg = 0.0;
    deg += 0.1;
    if (deg >= 2*M_PI) {
        deg-=2*M_PI;
    }

    static GLKMatrix4 modelview;
    modelview = GLKMatrix4Translate(GLKMatrix4Identity, 0, 0, -25.0f);
    modelview = GLKMatrix4Rotate(modelview, deg, 0.0f, 1.0f, 0.0f);

    // Correction for loaded model because in blender z-axis is facing upwards
    modelview = GLKMatrix4Rotate(modelview, -M_PI/2.0f, 0.0f, 1.0f, 0.0f);
    modelview = GLKMatrix4Rotate(modelview, -M_PI/2.0f, 1.0f, 0.0f, 0.0f);

    effect.transform.modelviewMatrix = modelview;

    static GLKMatrix4 projection;
    GLfloat ratio = self.view.bounds.size.width/self.view.bounds.size.height;
    projection = GLKMatrix4MakePerspective(45.0f, ratio, 0.1f, 100.0f);
    effect.transform.projectionMatrix = projection;
}

#pragma mark GLKViewDelegate

- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect
{
    [effect prepareToDraw];

    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    // Render the model
    [model render];
}

@end
  1. We load the model within our viewDidLoad:-method.
  2. We make some adjustments to the model-view-matrix because, in contrast to OpenGL, in Blender the z-axis faces upwards. If we did not do this, the model would be rendered in the wrong orientation. (We also changed the model-view matrix so the rendered apple-logo is nicely spinning.)
  3. Within the glkView:drawInRect:-method we call the render-method of our model.
You can get the full Xcode project here.
If you run the application in the simulator, you should see a rotating apple-logo. The 3D-effect is not very strong, as we have not introduced lighting yet and we used the same color for all faces; but still: well done!

The final result; imagine the logo is rotating 😉

What’s next?

I wanted to end this second part of the series with something that already shows a nice result based on what we have learned. Thus, we have created a very simple exporter for our model, but we have skipped a lot of points we have to tackle in the next part of the series:

  • When we do the triangulation of our model, we do it on the model that is stored in the Blender file itself. What we actually want in the future is to make a copy of our model and work on that. We might later want to enhance or remodel it, and a triangulated model makes that harder. We could obviously make sure not to save the Blender file after executing the script, or always press Undo, but there are better ways.
  • The export-script we have written is no real Blender export script that appears under File -> Export. We should change this so we can also offer the user a dialog for where to actually store the exported model and under which name. After all, we might work with a designer who does not want to modify Python code to change the location of a saved file.
  • In Blender the z-axis points upwards. This is different from OpenGL, where the y-axis is the one facing upwards. If we don’t want to correct the orientation with the model-view-matrix in our OpenGL code, we have to enhance the export-script to do this conversion during export.
  • Our model-format at the moment is very inefficient, as we have a lot of duplicate vertices in our model-file (adjacent faces share common vertices). Second, exporting to a binary format spares us the string-to-float conversion and would make loading the model much faster. Third, the final result in the iOS simulator does not look very 3Dish yet. This is due to the missing normals for the lighting calculations. Also, textures/materials for the faces are missing.
You see, we have a lot to enhance here. Stay tuned for the next part of the series where we will tackle all of these points (except materials/textures; that deserves its own part).

Update:

I accidentally checked in the apple_logo.blend-file while in Edit-mode. Unfortunately, Object.data is never updated until this mode is left again. So, even though the model looks triangulated in Blender, the underlying data isn’t, and the export-script will not export triangles. I updated the file on GitHub, but if you already downloaded the files before, you can just go back to Object-mode and try the export again. Thanks to Johnson Tsay for noticing this!
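
If you want to guard your exporter against this pitfall, a small hedge like the following at the top of the script should help (Object.mode reports the mode of that particular object):

import bpy

obj = bpy.context.active_object
# Mesh data is only written back when leaving Edit-mode
if obj.mode == 'EDIT':
    bpy.ops.object.mode_set(mode='OBJECT')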

Written by 38leinad

November 2, 2011 at 9:54 am

Practical Blender with GLKit – Part 1 – Introducing GLKit

with 20 comments

This is my first post for idevblogaday.com and I am quite excited to be part of this community. The main reason for doing this is so that I do iOS development more consistently. Hopefully I can keep up with the bi-weekly schedule.

With this post, I would like to start a series about the use of Blender for iOS game development. I will base this series on the new iOS 5 framework GLKit. I will try to keep it rather practical and assume knowledge of OpenGL ES and Blender basics. If that is not the case for you, I will give references to some literature for self-learning.

The general idea for the different parts of the series is at the moment as follows:

  1. Introduction: Basically, set up a GLKit-based project from the ground up. This will be more of a beginners-type topic to get started. As GLKit is a newly introduced API in iOS 5, I think it makes sense to spend a whole post on it.
  2. Blender and Python: I will introduce the powerful Python scripting engine underneath Blender and how to write your own scripts that access and modify the modeled objects.
  3. Blender Exporting: We will use Blender’s Python scripting engine to write our own export-function for a custom 3D model format and write the corresponding model-loader for our GLKit project of Part 1.
  4. Blender Animations: We will extend our model to contain skeletal animation and add the necessary adjustments to our Blender export-script and our GLKit project.
  5. We will see what I can come up with. I am not that far yet.
Please note that this series is supposed to be rather practical: how to use GLKit and specifically Blender for game-development. So I will assume knowledge of OpenGL and some self-teaching on Blender and Python (whatever basics you are missing). I will not try to write just another introduction to OpenGL and Blender, as there are already plenty of good ones out there that I will point to.

OpenGL ES Links

If you have followed this blog before, you might know that I expect you to read some external resources on your own to get the necessary concepts if you don’t know them yet. So here is some external literature for OpenGL ES, if you need to refresh your knowledge:

Introduction

So what actually is GLKit? If you have worked a lot with OpenGL ES 1 and its fixed-function pipeline and then at some point tried to switch to OpenGL ES 2 with its freely programmable, shader-based model (because Apple said this is the way to go), you know that this transition is not easy at first.
OpenGL ES 2 gives you great flexibility if you are an advanced developer in this area (unlike me), but you feel overwhelmed when you first see the Xcode template for it:
  • You have to learn the shader language
  • You have to write the code for loading, compiling and linking the shaders yourself (well, it is in the template; but anyway)
  • You have to set up the different buffers (pixelbuffer, depthbuffer, stencilbuffer, …) yourself (again; in the template, but it just leaves a bad feel in your stomach)
  • You lose a lot of the beloved APIs for manipulating the modelview and projection matrices.
The overhead when you start a new project is just quite high. GLKit tries to tackle this, and actually does it quite well, by replicating the fixed-function pipeline of OpenGL ES 1; so you can choose to do some basic rendering with the capabilities of the fixed-function pipeline and only resort to OpenGL ES 2 features for some advanced effects. Let’s have a look.

Project Setup

Open up Xcode 4 and create a “Single View Application” named “GLKitAndBlender”. I made it a “storyboard-based” application and Automatic-Reference-Counted, as it is the new hip thing to do; but note that we will not really need anything related to the storyboard, as we only have one scene worth of information (our OpenGL view).
As we will use OpenGL ES and the new GLKit framework, add them both to the linked libraries in the Build Phases pane.

Link the project with GLKit and OpenGLES

Next, go to the autogenerated GLKitAndBlenderViewController.h header-file and give it the protocols, methods, and members as you see below:

#import <UIKit/UIKit.h>
#import <GLKit/GLKit.h>

@interface GLKitAndBlenderViewController : GLKViewController <GLKViewControllerDelegate, GLKViewDelegate> {
@private
    GLKBaseEffect *effect;
}

#pragma mark GLKViewControllerDelegate
- (void)glkViewControllerUpdate:(GLKViewController *)controller;

#pragma mark GLKViewDelegate
- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect;

@end

We do four main things here:

  1. Define this controller as a subclass of GLKViewController. This controller, plus the GLKView we will define in the next step in Interface Builder, saves a lot of work with regard to automatically setting up a render-loop and managing the framebuffer.
  2. We make this controller its own delegate by implementing the GLKViewControllerDelegate protocol. This protocol defines the method glkViewControllerUpdate: that is called each time before a new frame is rendered. You can use it for any kind of calculations that have to be performed prior to the actual rendering, so the render-method itself stays as lightweight as possible. In a game you might also use this method to update your game-physics and -state.
  3. We also implement GLKViewDelegate, which defines our actual render-method glkView:drawInRect:.
  4. Also, don’t forget to import the GLKit header-files!
Additionally, we have defined a member of type GLKBaseEffect that we will see in action later. GLKit defines different effects that internally bundle vertex- and fragment-shaders and allow you to easily set the uniforms of the shaders via convenient properties.
GLKBaseEffect is the class that gives us the OpenGL ES 1 fixed-function pipeline very conveniently in an OpenGL ES 2 context. It will internally load the right shaders that implement the lighting-, texture- and material-model of the fixed-function pipeline. Lighting setup is no longer done via glLight(), glLightModel() and friends, but with the methods/properties defined on GLKBaseEffect. Have a look in the API for details. We will shortly see the basics of setting up the modelview and projection matrices as just one example.
The next small step is to select the storyboard-file (a new feature of iOS 5, basically a bundle of NIBs; so don’t be surprised that there is no MainMenu.xib) and make the view an instance of GLKView via the inspector on the right side. You might have to repeat this for the iPhone- or iPad-storyboard file, depending on whether you want to use/test both.

Make the view an instance of GLKView

Reimplementing the Swinging Square

You might know the standard OpenGL ES template in Xcode that displays a swinging, multi-colored square. We will reimplement this with the help of GLKit and GLKBaseEffect. Once you have that running, you have a minimal GLKit template and a good basis for the next part of the series.

Let’s first review the viewDidLoad-method:

- (void)viewDidLoad
{
    [super viewDidLoad];

    EAGLContext *aContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];

    GLKView *glkView = (GLKView *)self.view;
    glkView.delegate = self;
    glkView.context = aContext;

    glkView.drawableColorFormat = GLKViewDrawableColorFormatRGBA8888;
    glkView.drawableDepthFormat = GLKViewDrawableDepthFormat16;
    glkView.drawableMultisample = GLKViewDrawableMultisample4X;

    self.delegate = self;
    self.preferredFramesPerSecond = 30;

    effect = [[GLKBaseEffect alloc] init];

    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
}

First, we set up the OpenGL context (EAGLContext) for OpenGL ES 2 and set it on the GLKView. We also make ourselves the delegate of the GLKView and set some properties on the view so it can set up the framebuffer correctly.

After that, we set ourselves as our own delegate (always good if you know how to help yourself!) and set the preferred framerate the GLKViewController will try to maintain for us.

Oh; and we create an instance of GLKBaseEffect. If we look into the implementation of the GLKViewController delegate-method, we see what we can do with this effect-class:

- (void)glkViewControllerUpdate:(GLKViewController *)controller
{
    static float transY = 0.0f;
    float y = sinf(transY)/2.0f;
    transY += 0.175f;

    GLKMatrix4 modelview = GLKMatrix4MakeTranslation(0, y, -5.0f);
    effect.transform.modelviewMatrix = modelview;

    GLfloat ratio = self.view.bounds.size.width/self.view.bounds.size.height;
    GLKMatrix4 projection = GLKMatrix4MakePerspective(45.0f, ratio, 0.1f, 20.0f);
    effect.transform.projectionMatrix = projection;
}

As OpenGL ES 2 is missing all the APIs for easily manipulating the modelview and projection matrices (except from within the vertex-shader), GLKit defines a rich set of functions to create and manipulate matrices. So, equivalent to the code

glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(0, y, -5.0f);

we can do a GLKMatrix4MakeTranslation() to create the translation-matrix and then set it on our GLKBaseEffect via the effect.transform.modelviewMatrix property. The internals will make sure to hand this over to the vertex-shader.

The same goes for defining our projection-matrix. Instead of gluPerspective() from the good ole fixed-function days, we use GLKMatrix4MakePerspective() and set it on the effect-instance so these uniforms are internally passed to the shaders.

In fact, in the render-method, the first thing we have to do is call prepareToDraw on our GLKBaseEffect. Here the magic happens: the instance will bind the internally defined uniforms/attributes and link the shaders. After that, it is rather standard OpenGL ES 2 code that defines vertices and colors for the vertices and sticks them into the standard glVertexAttribPointer-calls to feed them to the vertex-shader. Note, though, that we have to use the GLKit constants GLKVertexAttribPosition and GLKVertexAttribColor so GLKit binds the attributes correctly to the variables in the shaders.

- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect
{
    [effect prepareToDraw];

    static const GLfloat squareVertices[] = {
        -0.5f, -0.5f, 1,
        0.5f, -0.5f, 1,
        -0.5f,  0.5f, 1,
        0.5f,  0.5f, 1
    };

    static const GLubyte squareColors[] = {
        255, 255,   0, 255,
        0,   255, 255, 255,
        0,     0,   0,   0,
        255,   0, 255, 255,
    };

    glClear(GL_COLOR_BUFFER_BIT);

    glEnableVertexAttribArray(GLKVertexAttribPosition);
    glEnableVertexAttribArray(GLKVertexAttribColor);

    glVertexAttribPointer(GLKVertexAttribPosition, 3, GL_FLOAT, GL_FALSE, 0, squareVertices);
    glVertexAttribPointer(GLKVertexAttribColor, 4, GL_UNSIGNED_BYTE, GL_TRUE, 0, squareColors);

    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

    glDisableVertexAttribArray(GLKVertexAttribPosition);
    glDisableVertexAttribArray(GLKVertexAttribColor);
}

And, due to the glDrawArrays()-call, we should actually see the swinging square when you run the project in the simulator. You can find the full project over at github.

The final result

So you see, GLKit is a quite nice API that makes the transition to OpenGL ES 2 not so harsh. You get a great matrix-library and can also use the fixed-function pipeline from OpenGL ES 1 for rendering where you are just fine with those capabilities.

And there is more: a class for easy texture-loading (GLKTextureLoader; no more copying of Texture2D into your project as the first action), skybox-effects (GLKReflectionMapEffect) and quaternions (GLKQuaternion). All stuff that you normally have to redo/reimport into your project to even get started.

What I was missing a bit at first was a base-class to derive from to define your own effects; so, basically, at least some help in loading and linking the shaders with a nice object-oriented API (there is only a protocol GLKNamedEffect you can implement to define your own effects). But this is only a minor point compared to all the other stuff you get for free. And I assume we can look forward to some quite nice additional effects in iOS 6+.

Blender Links & Hints

In the next part, we will start to use some advanced Blender scripting features, so I will assume some basic Blender knowledge up front. Here is a list of what helped me a lot to get started:

  • The best resources can be found directly at blender.org in the Tutorials section. I basically did the introductory series and some advanced tutorials (1, 2) to model static objects. Once you have done the steps in the webcasts on your own, you should have a good overview of the basic Blender features and can check out advanced topics if you like. And don’t worry if you don’t get everything in the advanced tutorials; every part you get is great, the rest will come later.
  • A great online-resource is the Blender 3D: Noob to Pro series.
  • There is a great number of Blender video tutorials out there. Unfortunately, a lot are for older versions. As the interface has changed over time, be sure to look for the right tutorials; i.e. the interface in the webcasts should look the same as in your Blender version (presumably 2.5x) or you might not get the most out of it.
  • Blender has a lot of keyboard shortcuts and mastering them is the key in becoming a Blender guru. This keyboard shortcut sheet helped me a lot in that regard.
  • If you read this, chances are high that you work on a MacBook. Unfortunately, Blender makes good use of the numerical keypad and the third mouse-button for changing the viewport in the 3D view, so I recommend using a three-button mouse for modeling. To at least emulate the 3-button mouse and the numerical keypad, there are useful settings in Blender’s Preferences (File -> User Preferences): check “Emulate 3-button mouse” to use Alt-left-mouse-button to rotate the 3D view and “Emulate numpad” to use the standard number-keys as a replacement for the num-pad. Don’t forget to press “Save as Default” if you want Blender to remember this for future launches.

Check "Emulate 3-button mouse" and "Emulate numpad" on your MacBook

Have fun modeling. You will see Blender is just an amazing tool.

Written by 38leinad

October 19, 2011 at 10:36 am