fruitfly

a technical blog

Practical Blender with GLKit – Part 2 – Blender Scripting with Python

with 12 comments

Welcome to the second part of this practical series on using Blender with GLKit-based iOS applications. The first part focused on setting up a basic GLKit Xcode project as the prerequisite for the following parts. As GLKit is a quite new API, I think it was worth it. I also tried to give some links and tips for learning the basics of Blender modeling.

In this part, we will jump directly into the Python API of Blender. I assume you have already tinkered with Blender, done some basic modeling and can find your way around the UI. I will also assume you have some basic knowledge of the Python scripting language. If not, I recommend googling for “python in 10 minutes”. We will only be using the most basic features of the language.

In contrast to my initial plans, this is a rather lengthy post: we have to introduce some basics, will go all the way to writing a very simple export script in Blender, and will also write the corresponding import functionality for our iOS application. My feeling while writing was that it wraps things up much more nicely if we have some kind of result in our iOS application to show off at the end of this part.

Introduction

Depending on how close a look at Blender you have already had, you might know that it comes with its own Python runtime in the application bundle. While the core of the application is written in C/C++, the whole UI layer and the operations and actions you can trigger via the user interface are exposed to Python. If you have a look into the application bundle, you will find the subfolder /Applications/blender.app/Contents/MacOS/2.59/scripts/ containing lots of Python script files. “addons” contains many export scripts for different kinds of model formats. This is a rich stash of code snippets you should be able to understand after this post.

When it comes to writing and executing your own Python scripts and tinker with the API you basically have three different options:

  1. Start Blender with a Python script for execution.
  2. Write a python script within the integrated Text Editor of Blender and execute it with the push of a button.
  3. Use the interactive Python command-line to write and execute a script line-by-line.
Up until now I have never used approach 1 because I am still in the learning phase; it makes sense for bulk operations on many Blender model files, though. We will use approach 2 later (you can still use an external editor to write the script) to execute a whole script of operations on our Blender model. Approach 3 is really the one that makes exploring the API easy: you can try out operations step by step, see the changes to your model and copy the lines into a full script file if the result was as expected.

Python Console

Let’s have a closer look at approach 3 and explore data structures and operations via the powerful Python Console. Open up this Blender model of the apple-logo. If you still have the default interface layout, select the button in the lower-left corner of the window that should show the animation timeline. Change it to the Python Console like in the following picture.

Change one of the subviews to the interactive “Python Console”

This is the interactive Python Console/REPL, just like the regular Python console. Read the blue output: pressing Up/Down to go back and forth in the command history is nothing special, but auto-completion is really the killer feature for quickly discovering the API. Have a try and type:
>>> bpy.
and hit Ctrl-Space. You should see a list of possible sub-modules you can expand the command with. Type
>>> bpy.context.
and hit Ctrl-Space again. You see a list of all methods and properties you can call on the context. Make it
>>> bpy.context.scene.objects
bpy.data.scenes["Scene"].objects 
and press Enter. This prints the list object containing all objects in the scene. Unfortunately, it is not a native Python list, so it does not actually list its contents. Go back one step in the command history to display the command again and enclose it in a list-call:
>>> list(bpy.context.scene.objects)
[bpy.data.objects["Point"], bpy.data.objects["Apple"], bpy.data.objects["Camera"]]
If you execute this, you should see a list of the objects in the scene. For me, the scene consists of three objects named “Point”, “Apple” and “Camera”. The first is a light-source, the second our apple-logo model and the last is the camera object in the scene.
But what if you don’t know this ahead of time and need to find out in one of your scripts what type an object is (for the simple model exporter that we will write later, we are only interested in exporting the actual model)? Well, we can try to ask for the type. Let’s do it the Python way:
>>> for o in bpy.context.scene.objects: type(o) 
... <Press Enter>
<class 'bpy_types.Object'>
<class 'bpy_types.Object'>
<class 'bpy_types.Object'>

Ok, so obviously, many objects are represented by the type bpy_types.Object in Blender. But there is also a property “type” on bpy_types.Object (which you can find easily with the auto-completion feature). Let’s try it:

>>> for o in bpy.context.scene.objects: o.type
... 
'LAMP'
'MESH'
'CAMERA'

So, if we are interested in the camera, we can simply check the type-property of the object. Actually, we do not care so much about the camera (or the light); we want to work with the apple-logo, which is of type “MESH”, basically representing a set of polygons. Assign it to a variable by executing this:

>>> apple = bpy.data.objects["Apple"]

We will use this variable soon when we are going to explore what we can do with our model. But first, let’s get some general introduction to the different modules of the API.

Common Modules

“bpy” is the main module of the Python API. It contains most other Blender-related sub-modules to access and interact with almost everything you can also do via a button press or other interface element in the UI. The most important modules are:

The context – bpy.context

This basically gives you access to the current state of the Blender UI and is necessary to understand because we will use our scripts like macros you might know from other applications. If we switch a mode, we will actually see a change in the Blender UI; this is represented in the context.

  • bpy.context.mode: What is the current mode of Blender? When you work with your models and are a beginner like me, you are mostly either in Object- or Edit-mode. So, if you access this property, you should get “OBJECT” or “EDIT” as return-value.
  • bpy.context.active_object: Which object is currently active? To be truthful, I haven’t found much use for this property yet, as I mostly have only one object selected at any time; in that case, the active object is also the selected object, which you can query with the next property.
  • bpy.context.selected_objects: Which objects are currently selected? Select/deselect the apple-logo or the camera and see as the property changes.
  • bpy.context.scene: We already played with this before. It gives you access to all parts of the scene. So, whatever you see in the Outliner-view, you can access it from here.

The Outliner-view

The data – bpy.data

This submodule gives you access to all data that is part of the currently opened Blender file (.blend). This might be a superset of what is available through the context, as you can create objects programmatically that do not show up in the scene. If you try it for now, you should get the same list of objects as in the scene:

>>> list(bpy.data.objects)
[bpy.data.objects["Apple"], bpy.data.objects["Camera"], bpy.data.objects["Point"]]

When we start to make copies of objects to work on, this list will temporarily contain more objects.

The operations – bpy.ops

If you have been playing around with the property “bpy.context.mode” introduced above, you might have noticed that it is not possible to set it. It is read-only, and changing it requires calling a corresponding operation from the “bpy.ops” module. Specifically, the operations are grouped into submodules again. To operate on an object (bpy_types.Object), check “bpy.ops.object”; for operations on a mesh, check “bpy.ops.mesh”. We will later see how easy it is to find the Python API call for a button/element in the Blender UI. But let’s play around with some basic operations first:

>>> bpy.ops.object.mode_set(mode='EDIT')
{'FINISHED'}
>>> bpy.ops.object.mode_set(mode='OBJECT')
{'FINISHED'}

If you were in Object-mode before executing these commands, you should have seen the Blender UI switch to Edit-mode and, with the second command, back to Object-mode. This is an important command: you can only perform certain operations in Edit-mode, just like you are only able to do certain things in the UI when you are in a specific mode. Which modes are available also depends on the currently selected object. Select the camera object via the UI (right mouse-click) and see that the call which worked before now fails:

>>> bpy.ops.object.mode_set(mode='EDIT')
Traceback (most recent call last):
 File "<blender_console>", line 1, in <module>
 File "/Applications/blender.app/Contents/MacOS/2.59/scripts/modules/bpy/ops.py", line 179, in __call__
 ret = op_call(self.idname_py(), None, kw)
TypeError: Converting py args to operator properties: enum "EDIT" not found in ('OBJECT')

Scripting Workflow

We will see that the whole experience of Python scripting in Blender is very beginner-friendly, and if you follow a specific workflow you can develop rather complex scripts in no time.

I will show you this workflow based on a simple example. We now want to do one very common operation that we will usually need to export a model for use with OpenGL ES: triangulation (OpenGL ES only allows us to draw triangles, not quads as in standard OpenGL). It is a concrete example but might also help you with a lot of other problems in the future:

  1. Do your steps manually: Find out what you want to do with the help of the UI. E.g. select the apple-logo model, switch into Edit-mode, select all vertices (Select -> Select/Deselect All) and triangulate via Mesh -> Faces -> Quads to Tris. This should triangulate our model (actually, most of the apple is already triangulated; only the outer rim consists of quads). If that worked, undo the steps. If you know you can do it in the UI, you can certainly also do it via the Python API.

    Select all vertices with Select -> Select/Deselect All

    Triangulate via Mesh -> Faces -> Quads to Tris

  2. Do it step-by-step in the Python-console: The manual steps worked fine, so let’s try it step by step in the Python console again. Some steps we have actually already done before:
    >>> apple.select = True
    >>> bpy.ops.object.mode_set(mode='EDIT')
    {'FINISHED'}
    But now: how to select all vertices, and how to call the "Quads to Tris" operation? Well, when you did the steps manually, you might have seen something in Blender that is quite neat. While hovering over an operation in the menu, you see the Python API call that lies behind it (see the last two screenshots). For "Select/Deselect All", bpy.ops.mesh.select_all is shown.
    >>> bpy.ops.mesh.select_all( <Press Ctrl-Space>
     select_all()
    bpy.ops.mesh.select_all(action='TOGGLE')

    You see that I first pressed Ctrl-Space after the function name, so I see the doc-string and thus know which parameters the function takes. I noticed that it says “action='TOGGLE'” in the example. That sounds a little error-prone, because it might actually deselect everything if some vertices were already selected before this operation (for whatever reason). Let’s look this one up in the API reference to make sure we select all instead of deselecting: go to Help -> API Reference in the Info-view. A browser should open up with the documentation. Click on Operators (bpy.types) and then on Mesh Operators on the next page. We see that ‘SELECT’ is the action we want.

    >>> bpy.ops.mesh.select_all(action='SELECT')
    {'FINISHED'}

    For our last step, the actual triangulation, we can also see from the menu (see screenshot above) that the Python call is bpy.ops.mesh.quads_convert_to_tris:

    >>> bpy.ops.mesh.quads_convert_to_tris()
    {'FINISHED'}
    We did it. And it was not so hard, because of this unbelievably neat feature of giving us hints on the Python API directly within the UI. This works for many different things. E.g. if you are in Object-mode and hover over one of the XYZ-sliders in the Transform/Location-section, it shows "Object.location". So we directly know which property of the Object-type it maps to: for our model, it is apple.location (actually there is more to it to apply the transform to our underlying mesh, but we will see this in the next part of the series).

    Hovering over interface elements will give you a hint on the underlying Python API call

    I just cannot stop pointing out how neat this feature is 🙂

    Now, we just have to package it up into one Python script which we can execute as a whole; which is the next step.

  3. Create a script of the individual steps/commands from step 2. Basically, you can paste the individual commands from step 2 into a text file line by line to get a very static script. Enrich it with some variables and if-then-else cases and you are done.
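By the way, the “Quads to Tris” step above can be sketched in plain Python. This is a conceptual illustration only (a simple fan split from the first vertex), not Blender’s actual operator, which may pick either diagonal of a quad:

```python
def quad_to_tris(quad):
    """Split one quad face (a, b, c, d) into two triangles.

    Uses a fixed fan from the first vertex: (a, b, c) and (a, c, d).
    """
    a, b, c, d = quad
    return [(a, b, c), (a, c, d)]

# A quad given by four vertex indices becomes two triangles
print(quad_to_tris((0, 1, 2, 3)))  # [(0, 1, 2), (0, 2, 3)]
```

Applied to every four-vertex face of a mesh, this is exactly why a triangulated model never has more than three indices per face.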

You should have gotten the general idea of the three-step approach. Step 3 was only theory, though; we will now see how it can be done in practice with the Python Text Editor.

Python Text Editor

Take all the steps we have executed in the Python Console and put them in a file “my_simple_exporter.py” with your favorite text editor for Python, then save the file (we will do the actual export-stuff later). The script should look something like this:

apple = bpy.data.objects["Apple"]
apple.select = True
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='SELECT')
bpy.ops.mesh.quads_convert_to_tris()

Let’s open it up in Blender. Change one of the views (I took the one showing the Python Console) to “Text Editor” like this:

Switching to the Blender “Text Editor” view

Go to “Text -> Open Text Block” to open our script file. Note that Blender only keeps a reference to that external file, so you can still do all your editing with your favorite text editor. Blender will notice any changes (a small safety-ring icon is shown) and allow you to reload the file like this:

Blender will pick up on changes to the script and allow an easy reload

Let’s try to run the script via the “Run Script” button. You will see that Blender complains and nothing really happens; our script failed. But why? The bad thing is that we cannot really see what Blender complained about. The Console.app should normally come to the rescue, but due to some bug/different behavior on Mac OS the output is only written to the log daemon after Blender is closed. At least that is how it is for me, and I have read that other people had the same problem. The way I worked around this is by always opening Blender via the Terminal. The Terminal will then directly print our error.

Save the Blender file and close Blender. Open Blender again via the Terminal. For me, Blender is opened with this path:

/Applications/blender.app/Contents/MacOS/blender

Go into the Blender Text Editor, select my_simple_exporter.py and press “Run Script”. The following output should show up within the terminal:

Traceback (most recent call last): 
 File "/Users/daniel/Desktop/Blender/Session2/apple_for_logo.blend/my_simple_exporter.py", line 1, in <module> 
NameError: name 'bpy' is not defined

Well, if you are an advanced Python developer, you might have seen this coming right from the beginning: we have forgotten to import the required modules. The Python Console imports them for us by default; in our own script file, we have to do it ourselves. Add the following line to the beginning of your script file:

import bpy

Try again. This time it should have worked fine.

The Mesh – Vertices, Faces, Normals

You might remember when we called apple.type that Blender told us our model is of type “MESH”, but still all objects (model, camera and light) were of the Python type “bpy_types.Object”. Surely all these things should have different properties: a light should have properties like luminance, a camera has a direction it is facing, and a mesh should have vertices, edges, faces, normals, etc.

Blender hides these specifics behind the data-property. With it, you get access to all these type-specific properties. Have a try and check the types of the data-property for camera, light and model:

>>> for k in bpy.context.scene.objects.keys(): print('%s has a data-property of type %s' % (k, type(bpy.context.scene.objects[k].data)))
... <Press Enter>
Point has a data-property of type <class 'bpy.types.PointLamp'>
Apple has a data-property of type <class 'bpy_types.Mesh'>
Camera has a data-property of type <class 'bpy.types.Camera'>

As we want to export our model, we are most interested in the type “bpy_types.Mesh”. You can use the auto-completion trick to see a full list, or check the Python API reference we used before. The properties we are most interested in for now are “vertices” and “faces”.

  • Mesh.vertices gives us a list of vertices (bpy.types.MeshVertex) in the model. A MeshVertex has as property “co” that gives us access to its coordinate and a property “normal” for the normal.
  • Mesh.faces gives us a list of faces (bpy_types.MeshFace). Each face again has a property “vertices” that gives us a list of the indices into the Mesh.vertices-list. These indices define the face. After triangulating our model, we should only have three elements in each of the face’s vertices-lists.

Let’s try to list all faces with their vertex-coordinates:

>>> for (i,f) in enumerate(apple.data.faces): <Press Enter>
... for vertex_index in f.vertices: print('Face %d has vertex %s' % (i, apple.data.vertices[vertex_index].co)) <Press Enter>
... <Press Enter>
Face 0 has vertex Vector((-0.27777254581451416, 0.23545005917549133, 5.628979206085205))
Face 0 has vertex Vector((-0.2777732014656067, -0.17912480235099792, 4.809228897094727))
Face 0 has vertex Vector((-0.27777382731437683, 0.802850067615509, 4.097628593444824))
Face 1 has vertex Vector((-0.2777732014656067, -0.17912480235099792, 4.809228897094727))
Face 1 has vertex Vector((-0.27777382731437683, -0.233199805021286, 3.773179292678833))
Face 1 has vertex Vector((-0.27777382731437683, 0.802850067615509, 4.097628593444824))
<and a lot more...>

Note that I have inserted a tab at the start of the second line (the first line that starts with “…”) for this to work.
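This index-based layout can also be illustrated with plain Python and made-up data; no Blender required. The coordinates below are invented for the example:

```python
# A mesh stores each coordinate only once; faces reference them by index.
vertices = [
    (0.0, 0.0, 0.0),  # index 0
    (1.0, 0.0, 0.0),  # index 1
    (1.0, 1.0, 0.0),  # index 2
    (0.0, 1.0, 0.0),  # index 3
]
# Two triangles sharing the edge between vertex 0 and vertex 2
faces = [(0, 1, 2), (0, 2, 3)]

# Resolving the indices yields per-face coordinates, analogous to the
# console loop over apple.data.faces
for i, face in enumerate(faces):
    for vertex_index in face:
        print('Face %d has vertex %s' % (i, vertices[vertex_index]))
```

Note how the shared vertices 0 and 2 are stored only once but appear in both faces' index lists.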

UPDATE: With Blender 2.6, the property “faces” has been renamed to “polygons”.

The first Export Script

With basic Python file-I/O knowledge, we now have all the information we need to write a very basic model exporter. Open up my_simple_exporter.py within the Python Text Editor or your external text editor and modify it so it looks as follows:

import bpy
import os

# Change if file should be written some place else
file = open(os.path.expanduser("~/Desktop/mymodel.mdl"), "w")

model_object = None
model_mesh = None

# Search for the first object of type MESH
for obj in bpy.context.scene.objects:
    if obj.type == 'MESH':
        model_object = obj
        model_mesh = obj.data

# Triangulate
model_object.select = True
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='SELECT')
bpy.ops.mesh.quads_convert_to_tris()

for face in model_mesh.faces:
    for vertex_index in face.vertices:
        vertex = model_mesh.vertices[vertex_index]
        # Write each vertex on a separate line with x,y,z separated by a tab
        file.write(str(vertex.co[0]))
        file.write('\t')
        file.write(str(vertex.co[1]))
        file.write('\t')
        file.write(str(vertex.co[2]))
        file.write('\n')
There are basically three blocks of code:
  1. We find the first object in our Blender file that is of type “MESH”. This is a little more generic than the code we had before, where we just used the model named “Apple”.
  2. We triangulate our mesh; this should look familiar.
  3. We traverse all faces and vertex-indices to write each vertex of a face in a new line of the output file (x,y,z are separated by tabs). As we have triangulated our mesh, we know that always three lines in our exported model-file define one face.
Execute this file within the Python Text Editor and check that a file called “mymodel.mdl” has been created on your Desktop.
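As a quick sanity check of the exported file, here is a small stand-alone Python sketch that parses the format back into triangles (the function name is my own; it is not part of the exporter):

```python
def read_mdl(path):
    """Parse the tab-separated format written by my_simple_exporter.py.

    Each line holds one vertex as x<TAB>y<TAB>z; every three consecutive
    lines form one triangle.
    """
    vertices = []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue  # tolerate blank/trailing lines
            x, y, z = (float(c) for c in line.split('\t'))
            vertices.append((x, y, z))
    if len(vertices) % 3 != 0:
        raise ValueError('file does not contain whole triangles')
    # Group the flat vertex list into triangles
    return [tuple(vertices[i:i + 3]) for i in range(0, len(vertices), 3)]
```

For the checked-in apple-logo file, read_mdl('mymodel.mdl') should yield 951 / 3 = 317 triangles.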

Loading the Model

So, what’s left is to actually import our model into our iOS application and render it. As the model format is quite straightforward and we assume basic OpenGL ES knowledge, I will only briefly describe the steps and let the code speak for itself.

Open up the Xcode project “GLKitAndBlender” from Part 1 and create a new Objective-C class “MyModel”.

#import <Foundation/Foundation.h>

@interface MyModel : NSObject

- (id)initWithFilename:(NSString *)filepath;
- (BOOL)load;
- (void)render;

@end

The interface consists of an initializer that takes the path of the model to load (mymodel.mdl), a load-method to actually read the file, and a render-method to do the OpenGL calls for rendering the model to the screen.

Here is the implementation of the class:

#import "MyModel.h"

#import <GLKit/GLKit.h>

@interface MyModel () {
@private
    NSUInteger _num_vertices;
    GLfloat *_vertices;

    NSString *_filepath;
}
@end

@implementation MyModel

- (id)initWithFilename:(NSString *)filepath
{
    self = [super init];
    if (self) {
        _filepath = filepath;
    }

    return self;
}

- (BOOL)load
{
    NSString *file_content = [NSString stringWithContentsOfFile:_filepath encoding:NSUTF8StringEncoding error:nil];
    NSArray *components = [file_content componentsSeparatedByCharactersInSet:[NSCharacterSet characterSetWithCharactersInString:@"\n\t"]];
    // A trailing newline produces an empty last component; filter it out
    // so the vertex count is exact and we do not write past the buffer.
    NSArray *coordinates = [components filteredArrayUsingPredicate:[NSPredicate predicateWithFormat:@"length > 0"]];
    _num_vertices = [coordinates count] / 3;
    _vertices = malloc(sizeof(GLfloat) * 3 * _num_vertices);

    NSLog(@"Number of vertices to load: %lu", (unsigned long)_num_vertices);

    int i=0;
    for (NSString *coordinate in coordinates) {
        _vertices[i++] = atof([coordinate cStringUsingEncoding:NSUTF8StringEncoding]);
    }

    NSLog(@"Model loaded");

    return YES;
}

- (void)render
{
    static const float color[] = {
        0.8f, 0.8f, 0.8f, 1.0f
    };

    glVertexAttrib4fv(GLKVertexAttribColor, color);

    glEnableVertexAttribArray(GLKVertexAttribPosition);
    glVertexAttribPointer(GLKVertexAttribPosition, 3, GL_FLOAT, GL_FALSE, 0, _vertices);

    glDrawArrays(GL_TRIANGLES, 0, _num_vertices);

    glDisableVertexAttribArray(GLKVertexAttribPosition);
}

- (void)dealloc
{
    free(_vertices);
}

@end
The load-method reads each coordinate separately, converts it from an NSString to a GLfloat and stores it in the GLfloat array _vertices. The render-method then draws our model in the same way as the swinging square from Part 1. We only set a constant color-attribute instead of a per-vertex color array, as we draw everything in the same grayish color.
What’s left is to actually use the MyModel class. For this, first import the “mymodel.mdl” file into the Xcode project. Once you have done this, add an instance variable “model” to the GLKitAndBlenderViewController class header. The changes to the GLKitAndBlenderViewController implementation file contain comments and can be seen below:
#import "GLKitAndBlenderViewController.h"

#import "MyModel.h"

@implementation GLKitAndBlenderViewController

- (void)viewDidLoad
{
    [super viewDidLoad];

    EAGLContext *aContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];

    GLKView *glkView = (GLKView *)self.view;
    glkView.delegate = self;
    glkView.context = aContext;

    glkView.drawableColorFormat = GLKViewDrawableColorFormatRGBA8888;
    glkView.drawableDepthFormat = GLKViewDrawableDepthFormat16;
    glkView.drawableMultisample = GLKViewDrawableMultisample4X;

    self.delegate = self;
    self.preferredFramesPerSecond = 30;

    effect = [[GLKBaseEffect alloc] init];

    // Load the model
    model = [[MyModel alloc] initWithFilename:[[NSBundle mainBundle] pathForResource:@"mymodel" ofType:@"mdl"]];
    [model load];

    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    glEnable(GL_DEPTH_TEST);
}

#pragma mark GLKViewControllerDelegate

- (void)glkViewControllerUpdate:(GLKViewController *)controller
{
    static float transY = 0.0f;
    transY += 0.175f;

    static float deg = 0.0;
    deg += 0.1;
    if (deg >= 2*M_PI) {
        deg-=2*M_PI;
    }

    static GLKMatrix4 modelview;
    modelview = GLKMatrix4Translate(GLKMatrix4Identity, 0, 0, -25.0f);
    modelview = GLKMatrix4Rotate(modelview, deg, 0.0f, 1.0f, 0.0f);

    // Correction for loaded model because in blender z-axis is facing upwards
    modelview = GLKMatrix4Rotate(modelview, -M_PI/2.0f, 0.0f, 1.0f, 0.0f);
    modelview = GLKMatrix4Rotate(modelview, -M_PI/2.0f, 1.0f, 0.0f, 0.0f);

    effect.transform.modelviewMatrix = modelview;

    static GLKMatrix4 projection;
    GLfloat ratio = self.view.bounds.size.width/self.view.bounds.size.height;
    projection = GLKMatrix4MakePerspective(45.0f, ratio, 0.1f, 100.0f);
    effect.transform.projectionMatrix = projection;
}

#pragma mark GLKViewDelegate

- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect
{
    [effect prepareToDraw];

    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    // Render the model
    [model render];
}

@end
  1. We load the model within our viewDidLoad:-method
  2. We make some adjustments to the model-view matrix because, in contrast to OpenGL, in Blender the z-axis faces upwards. If we did not do this, the model would be rendered in the wrong orientation. (We also changed the model-view matrix so the rendered apple-logo is nicely spinning.)
  3. Within the glkView:drawInRect:-method we call the render-method of our model.
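The axis correction from point 2 could alternatively be baked into the export script. A minimal sketch of the per-vertex mapping, assuming both coordinate systems are right-handed (Blender: z up, OpenGL: y up):

```python
def blender_to_opengl(v):
    """Map a Blender coordinate (z up) to an OpenGL coordinate (y up).

    (x, y, z) in Blender becomes (x, z, -y) in OpenGL: z becomes the
    up-axis y, and Blender's y maps onto OpenGL's negative z to keep
    the coordinate system right-handed.
    """
    x, y, z = v
    return (x, z, -y)

# Blender's y-axis ends up pointing into the OpenGL screen
print(blender_to_opengl((1.0, 2.0, 3.0)))  # (1.0, 3.0, -2.0)
```

Converting at export time would let us drop the two extra GLKMatrix4Rotate calls from the rendering code.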
You can get the full Xcode project here.
If you run the application in the simulator, you should see a rotating apple-logo. The 3D effect is not very strong, as we have not introduced lighting yet and we used the same color for all faces; but still: well done!

The final result; imagine the logo is rotating 😉

What’s next?

I wanted to end this second part of the series with something that already shows a nice result based on what we have learned. Thus, we have created a very simple exporter for our model, but there are a lot of points we still have to tackle in the next part of the series:

  • When we triangulate our model, we do it on the model stored in the Blender file itself. What we actually want in the future is to make a copy of our model and work on that. We might later want to enhance or remodel it, and a triangulated model makes that harder. We could obviously make sure not to save the Blender file after executing the script, or always press Undo, but there are better ways.
  • The export script we have written is not a real Blender export script that appears under File -> Export. We should change this so we can also offer the user a dialog for choosing where to store the exported model and under which name. After all, we might work with a designer who does not want to modify Python code to change the location of a saved file.
  • In Blender the z-axis points upwards. This is different from OpenGL, where the y-axis faces upwards. If we don’t want to correct the orientation with the model-view matrix in our OpenGL code, we have to enhance the export script to do this conversion for us.
  • Our model format is currently very inefficient, as we have a lot of duplicate vertices in our model file (adjacent faces share vertices). Second, exporting to a binary format would spare us the string-to-float conversion and be much faster for loading the model. Third, the final result in the iOS simulator does not look very 3D-ish yet. This is due to the missing normals for the lighting calculations. Also, textures/materials for the faces are missing.
You see, we have a lot to enhance here. Stay tuned for the next part of the series, where we will tackle all of these points (except materials/textures; that deserves a part of its own).
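To sketch the first point about duplicate vertices: shared vertices can be collapsed into a unique vertex list plus an index list, which would later also let us draw with glDrawElements instead of glDrawArrays. A plain-Python sketch, not yet wired into the exporter:

```python
def build_index_buffer(triangle_vertices):
    """Collapse duplicate vertices into a unique list plus indices.

    triangle_vertices: flat list of (x, y, z) tuples, three per triangle,
    as produced by the naive exporter above.
    """
    unique = []    # each distinct vertex, kept in first-seen order
    index_of = {}  # vertex tuple -> position in `unique`
    indices = []
    for v in triangle_vertices:
        if v not in index_of:
            index_of[v] = len(unique)
            unique.append(v)
        indices.append(index_of[v])
    return unique, indices

# Two triangles sharing an edge: 6 input vertices, but only 4 distinct ones
tris = [(0, 0, 0), (1, 0, 0), (1, 1, 0),
        (0, 0, 0), (1, 1, 0), (0, 1, 0)]
unique, indices = build_index_buffer(tris)
# unique has 4 entries; indices == [0, 1, 2, 0, 2, 3]
```

For a real model, this typically shrinks the vertex data considerably, since most vertices are shared by several faces.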

Update:

I accidentally checked in the apple_logo.blend file while in Edit-mode. Unfortunately, the Object.data is not updated until this mode is left again. So, even though the model looks triangulated in Blender, the underlying data isn’t, and the export script will not export triangles. I updated the file on github, but if you downloaded the files before, you can just go back to Object-mode and try the export again. Thanks to Johnson Tsay for noticing this!

Written by 38leinad

November 2, 2011 at 9:54 am

12 Responses


  1. Hi 38leinad,

    I opened the apple_logo.blend with Blender 2.59 and tried to
    generate the mymodel.mdl with the posted Python script,
    somehow the model file size is not equal with yours,
    yours has 951 lines(dividable by 3), the one generated from my Blender
    is only 863 lines(not dividable by 3), would you give me some ideas how
    this can happen, thanks!

    Johnson Tsay

    November 10, 2011 at 7:53 pm

    • Hi Johnson,
      I made a mistake for the checked in blend-file. I triangulated it, but never went out of edit-mode before saving and pushing to github. So, even though the model in blender looks triangulated, the underlying data the export-script works on, still access faces that have 4 vertices and thus produces a different result.
      I checked in the blend-file again, this time in Object-mode. You, just have to leave EDIT-mode and if you try again, it should be the expected 951 lines.

      Thanks for noticing and pointing it out.

      Cheers,
      Daniel

      38leinad

      November 11, 2011 at 2:08 am

  2. […] the new realtime ray-traced render looks pretty impressive. As I have been doing the Practical GLKit with Blender series on Blender 2.5 up to now, I was also interest in how much the Python API has changed in thew […]

  3. Hi,

    Great tutorials! Loving them!!
    When are you gonna bring out the next one? I can’t wait to read more/learn more.
    I’ve totally overlooked python and now am loving it.

    JoshM

    December 22, 2011 at 6:51 am

    • Hi JoshM,
      Thanks for the kind words! Yeah, I also overlooked python at first 🙂

      I am planning to post the next part in January. As I am also constantly learning while writing these posts I try to keep 1-2 post ahead in regard of the topics I am covering in the posts. Otherwise, I fear I will have to make major revisions to the past posts…

      Cheers,
      Daniel

      38leinad

      December 25, 2011 at 12:14 pm

  4. Hi,

    I’ve enjoyed the first two installments of this tutorial, and looking forward to part 3 and especially part 4( exporting the material/textures).

    Do you know when you’ll be adding these?

    Cheers and thank you.

    Wayne

    December 25, 2011 at 1:03 am

    • Hi Wayne,
      Thanks for the positive comment. Part 3 is planned for January. For Part 4 I can’t tell yet. You have to push me if it takes too long 😉

      Cheers,
      Daniel

      38leinad

      December 25, 2011 at 12:18 pm

      • Hi Daniel,

        Have you have a chance to post part 3 or 4… I’ve tried a google search but couldn’t seem to locate them.

        Thanks again,
        Wayne

        February 22, 2012 at 3:08 pm

  5. HI this is a great tutorial.

    I’m looking foward to part 3!

    Cheers

    Barry

    January 16, 2012 at 3:31 am

  6. Great Stuff !! I’m running into trouble on this Tutorial though running Xcode 4.3 and blender 2.62 it appears during the compile it’s seg faulting when it tries to compile the MyModel files.

    Kevin Jones

    March 6, 2012 at 8:03 pm

    • Great Stuff !! I’m running into trouble on this Tutorial though running Xcode 4.3 and blender 2.62 it appears during the compile it’s seg faulting when it tries to compile the MyModel files.

      Ok Nevermind. I downloaded your project and it works so it must be something I did.
      Best Regards,
      Kev

      Kevin Jones

      March 6, 2012 at 8:08 pm

  7. is it possible to add an UIToolbar to a GLKView?

    ruysch

    April 1, 2012 at 10:39 am

