I have committed a first version of the BHM 3D skinning sample to SourceForge.
You can find it here: https://sourceforge.net/p/bhmfileformat/code/ci/default/tree/BHM3DSample/
The code is not fully commented yet; I’m still working on that (it is Doxygen-compliant so far, though, so try running Doxygen on it). But I have written a README.txt file, which explains the basic ideas behind the design of the file format, the rendering/animation framework, and the implementation of the matrix palette skinning process. It should guide you through the code, which I don’t think is all that large or complicated. Most of the OpenGL-related code is hidden away in deeper layers, so it can be read and understood without API cruft getting in the way, and you can focus on the actual BHM data and how the animation and rendering work.
BHM 3D Sample
This sample is part of the BHM File Format project. It demonstrates how BHM can be used for storing 3D geometry and animation.
The source code is released under the BSD license. For more information, see COPYRIGHT.txt.
The code is written in a portable style and has been tested with Microsoft Visual Studio and gcc, under Windows XP, Vista, Windows 7, Linux and FreeBSD.
In order to build this sample, the following libraries need to be installed on the system (the development version, if applicable):
– OpenGL (or compatible, such as MesaGL)
– GLUT (or an alternative such as freeglut)
– GLUX (included in this distribution)
In order to run the sample, the OpenGL implementation needs to support the following extensions:
– ARB_shader_objects/ARB_vertex_shader/ARB_fragment_shader (for the GLSL version)
– ARB_vertex_program/ARB_fragment_program (for the assembly version)
Strictly speaking, the VBO and multitexture extensions are not required for this particular sample, and the code could easily be modified to work without them. However, it is highly unlikely that hardware supporting the shader/program extensions does not support VBO and multitexture. They are therefore left in, so that the OpenGL framework gets the best possible performance and can easily be reused for more complex rendering (perhaps in future samples or projects).
Overview of the framework
The framework can be seen as two layers on top of OpenGL:
1) The ‘GLUX’ layer
This layer provides some basic data structures and helper functions/wrappers to make OpenGL more accessible. There are helpers for texture loading, shader loading, compiling and linking, and basic mathematics (vectors, matrices and quaternions).
The functions and data structures are mostly C-style, resembling OpenGL’s C-style procedural API. The GLUX layer is modeled after Microsoft’s D3DX layer for Direct3D, as it provides similar functionality.
The GLUX layer also contains the basis for the next layer, by providing an interface for reference-counting and a template for wrapping OpenGL resources into reference-counted objects with automatic cleanup.
GLUX is designed as a stand-alone API, and can be used by anyone who wants to develop their own OpenGL framework, but doesn’t want to re-invent the wheel for loading textures, shaders and such, or for basic maths.
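To make the reference-counting idea above concrete, here is a minimal sketch of what such an intrusive scheme can look like. The class and method names are hypothetical illustrations, not the actual GLUX interface:

```cpp
#include <cassert>

// Minimal sketch of an intrusive reference-counting base class, similar in
// spirit to the interface GLUX provides (names here are hypothetical).
class RefCounted {
public:
    RefCounted() : refCount(1) {}
    void AddRef() { ++refCount; }
    // Release() deletes the object when the last reference is dropped,
    // which is what gives the resource wrappers their automatic cleanup.
    void Release() { if (--refCount == 0) delete this; }
    int GetRefCount() const { return refCount; }
protected:
    virtual ~RefCounted() {}   // delete only through Release()
private:
    int refCount;
};

// A resource wrapper derives from RefCounted and frees its OpenGL handle
// in the destructor (the GL call is commented out to stay self-contained).
class TextureResource : public RefCounted {
public:
    explicit TextureResource(unsigned glHandle) : handle(glHandle) {}
protected:
    ~TextureResource() { /* glDeleteTextures(1, &handle); */ }
private:
    unsigned handle;
};
```

Objects start at a reference count of 1 on creation; every owner calls AddRef() when storing a pointer and Release() when done, so the wrapped OpenGL resource is freed exactly once, when the last owner lets go.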
2) The object-oriented layer.
This layer wraps the OpenGL and related APIs into an actual rendering framework. By using objects, the functionality can easily be grouped with the relevant data, and management and re-use of data/objects can be implemented cleanly by making use of standard constructors, destructors and common design patterns such as factory methods and reference counting.
Certain functionality can also be extended or modified easily by using inheritance.
A list of the relevant objects, as they are defined within this framework:
– A Mesh is a list of primitives (in this case triangles).
– A Material is a combination of material colours, textures and shaders, which together define the visual appearance of a surface.
– A Movable is an object that can be positioned and animated in the world by a transformation matrix.
– An Object is a Movable which contains a Material and a list of Meshes. Together they form a geometric object which can be animated and rendered into the world.
An Object also contains a list of child-objects. In other words, it is recursive. This way, an object can be constructed from a set of smaller objects, which can be made of different Materials and/or may be movable relative to the parent object. In essence this means that the entire world can be represented as one Object.
– A BHMObject is a subclass of Object, which implements loading from a BHM file. A single BHMObject represents the entire contents of the BHM file, where all of the objects stored in the BHM file are child-objects.
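The object model above can be sketched as plain C++ classes. All names and members below are a simplified illustration, not the actual framework code:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Stubs standing in for the real framework types.
struct Matrix   { float m[16]; };
struct Mesh     {};   // a list of primitives (triangles)
struct Material {};   // colours, textures, shaders

// A Movable can be positioned/animated in the world by a transform matrix.
class Movable {
public:
    Matrix transform;
};

// An Object is a Movable with a Material, a list of Meshes and a list of
// child Objects; the recursion means an entire world can be one Object.
class Object : public Movable {
public:
    std::string name;
    Material* material = nullptr;
    std::vector<Mesh*> meshes;
    std::vector<Object*> children;

    // Recursively search the hierarchy by name; this is the kind of lookup
    // used to resolve the bone names from a BHMID_BONE_LIST node to the
    // bone Objects in the skeleton.
    Object* Find(const std::string& n) {
        if (name == n) return this;
        for (Object* c : children)
            if (Object* hit = c->Find(n)) return hit;
        return nullptr;
    }
};
```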
Overview of the BHM file format for 3D geometry
The BHM file format is hierarchical in nature, just like the XML file format. Therefore it is quite easy to express the above objects in a BHM file. The hierarchy can easily represent objects and child-objects, and link meshes, materials and animation data to a specific object.
The following node types are defined:
– The root node. All children of this node are part of the BHM 3D data.
– A camera node. This node contains a matrix and a set of variables (camera state) which describe a camera.
– An object node (BHMID_OBJECT). This node contains the zero-terminated name of the object, and a list of child nodes. These child nodes describe the geometry of the object.
Other BHMID_OBJECT nodes can also be child nodes of a BHMID_OBJECT node. This way the hierarchical scene graph can be built up.
– A matrix node. This node contains a 4×4 matrix.
– An index list node. This node contains a list of 16-bit indices into a vertex buffer, for indexed primitive rendering.
– A position list node. This node contains a list of 32-bit floats, grouped by x, y, z, which form the position information for a vertex buffer.
– A normal vector list node. This node contains a list of 32-bit floats, grouped by x, y, z, which form the normal vector information for a vertex buffer.
– A tangent vector list node. This node contains a list of 32-bit floats, grouped by x, y, z, which form the tangent vector information for a vertex buffer.
– A binormal vector list node. This node contains a list of 32-bit floats, grouped by x, y, z, which form the binormal vector information for a vertex buffer.
– A 2D texture coordinate list node (BHMID_TEX2D_LIST). This node contains a list of 32-bit floats, grouped by u, v, which form the 2D texture coordinate information for a vertex buffer.
– A position track node. This node contains a list of position key frames.
– A rotation track node. This node contains a list of rotation key frames.
– A bone list node (BHMID_BONE_LIST). This node contains a list of pairs of zero-terminated bone names and matrices.
The bone names refer to the bone objects which form the skeleton for the object. The skeleton is applied for matrix palette vertex skinning animation. The matrices store the ‘rest-pose’ for each bone.
– A bone weight list node (BHMID_WEIGHT_LIST). This node contains a list of blend weight and matrix palette index pairs, which are used during matrix palette vertex skinning.
Since each of the vertex attributes (such as position, normal, texture coordinates, etc.) is stored in a separate node, the format is very flexible. An object does not necessarily need to contain all the attributes. For example, if the object is not textured, there does not have to be a BHMID_TEX2D_LIST. If an object is not skinned, it does not need a BHMID_BONE_LIST and BHMID_WEIGHT_LIST. Only the data that is actually required is stored in the file.
From the application side, not all nodes have to be used. If an application does not support certain vertex attributes or other data, it can simply skip over the unknown node types, and only use the data that it understands. In many cases, the basic position, normal and texture coordinates will yield valid, usable geometry without using any additional vertex data that may be stored in the file, so the application will continue to function as normal.
This makes the file format reasonably backwards-compatible, while also being future-proof. New node types, adding extra functionality, can be freely introduced without having to change the entire file format or rewrite the read/write routines.
This is very similar to how an XML parser can simply skip over unknown tags, and how an XML format can easily be extended by adding new tags.
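The skip-unknown-nodes idea can be sketched as follows. The on-disk layout here is a hypothetical illustration, not the actual BHM specification; it assumes only the common chunked-format convention that every node starts with an ID and a payload size:

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <vector>

// Hypothetical node header: an ID and the payload size in bytes.
struct NodeHeader {
    uint32_t id;
    uint32_t size;   // payload bytes, excluding this header
};

// Walk a flat sequence of nodes and count those with a known ID,
// skipping everything else by its size field. A real reader would
// parse the payload of the known nodes (and recurse into children).
int CountKnownNodes(const std::vector<uint8_t>& data, uint32_t knownId) {
    int found = 0;
    size_t pos = 0;
    while (pos + sizeof(NodeHeader) <= data.size()) {
        NodeHeader h;
        std::memcpy(&h, &data[pos], sizeof h);
        pos += sizeof h;
        if (h.id == knownId)
            ++found;
        // Unknown IDs are simply skipped over; this is what makes the
        // format extensible without breaking old readers.
        pos += h.size;
    }
    return found;
}
```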
Matrix palette vertex skinning
The process of matrix palette vertex skinning is quite intuitive:
– The artist models a skeleton object inside of the object that is to be skinned.
– The artist then defines how much effect each bone of the skeleton has on each vertex in the object.
– The artist then applies animation to the skeleton.
– The skinning algorithm will automatically ‘wrap’ the object around the animated skeleton.
In this example, the skeleton inside the file is just like any other object. The object was modeled in 3ds Max, which treats a skeleton like normal geometry, so the geometry of the skeleton was also exported to the file. It can easily be made visible, but by default the geometry for bones is not imported by the sample.
Like any other object, the skeleton is a hierarchy of Objects, and each object is a Movable. Therefore the skeleton is animated like any other object. The skinned object is linked to these Movables by searching for the objects with the names in the bone list from the BHMID_BONE_LIST node.
The rest pose matrices describe the position and orientation of each bone at ‘rest’, i.e. in the initial pose. During initialization, they are multiplied with the world matrix of the skinned object, to compensate for any difference in position and orientation between the skinned object and the skeleton.
When rendering a frame, first the animation of the skeleton is processed. The Movable baseclass will calculate the interpolated position and rotation based on the keyframe information, and store the transform matrix.
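As an illustration, interpolating a position track between two key frames can look like the sketch below. The key layout and names are hypothetical, and rotations would typically use quaternion interpolation (slerp) rather than a plain lerp:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Vec3   { float x, y, z; };
struct PosKey { float time; Vec3 value; };   // one position key frame

// Sample a sorted position track at time t by linear interpolation
// between the two surrounding key frames.
Vec3 SamplePositionTrack(const std::vector<PosKey>& keys, float t) {
    if (t <= keys.front().time) return keys.front().value;  // clamp start
    if (t >= keys.back().time)  return keys.back().value;   // clamp end
    for (size_t i = 1; i < keys.size(); ++i) {
        if (t <= keys[i].time) {
            const PosKey& a = keys[i - 1];
            const PosKey& b = keys[i];
            float f = (t - a.time) / (b.time - a.time);  // blend factor 0..1
            return { a.value.x + f * (b.value.x - a.value.x),
                     a.value.y + f * (b.value.y - a.value.y),
                     a.value.z + f * (b.value.z - a.value.z) };
        }
    }
    return keys.back().value;
}
```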
Then the skinned object can be rendered. The object has references to all the bone Movables, so it can access their transform matrices.
Each transform matrix will be multiplied with the inverse of the corresponding rest matrix, in order to bring the bone from its own world space into the world space of the skinned object. The resulting set of transform matrices forms our matrix palette, and is now passed on to the vertex shader stage.
If one were to pick one of these transform matrices, and use this as the world matrix for the entire object, the object would appear to be ‘stuck to’ the particular bone, and follow its movement exactly.
Each vertex contains a list of 4 vertex blend weights and 4 matrix indices, which encode how much effect each bone has on the vertex. This means that up to 4 matrices can affect the vertex at a time (fewer than 4 matrices is possible, when one or more weights are 0).
Applying the matrix palette is a form of linear interpolation.
The matrix indices indicate which 4 bones affect this particular vertex, or in other words, which 4 matrices in the palette will be used.
First the position of the vertex is calculated for each of these 4 matrices. Then the vertex blend weights are applied to calculate the weighted average of these 4 positions. This is done by simply multiplying each position by the weight, and then adding all positions together. The result is the skinned vertex in world space.
Now this vertex in world space can be multiplied by the view and projection matrices as usual, to bring it into clip space, and send it off to the rasterizer.
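The per-vertex blend described above can be sketched on the CPU as follows (the vertex shader performs the equivalent computation); the matrix layout and names are illustrative:

```cpp
#include <cassert>
#include <vector>

struct Vec3 { float x, y, z; };
struct Mat4 { float m[16]; };   // row-major 4x4 matrix

// Transform a position (implicit w = 1) by a row-major 4x4 matrix.
Vec3 Transform(const Mat4& M, const Vec3& v) {
    return { M.m[0]*v.x + M.m[1]*v.y + M.m[2]*v.z  + M.m[3],
             M.m[4]*v.x + M.m[5]*v.y + M.m[6]*v.z  + M.m[7],
             M.m[8]*v.x + M.m[9]*v.y + M.m[10]*v.z + M.m[11] };
}

// Skin one vertex: transform the position by each of the 4 indexed
// palette matrices, then blend the results by the vertex weights.
Vec3 SkinPosition(const std::vector<Mat4>& palette,
                  const int indices[4], const float weights[4],
                  const Vec3& pos) {
    Vec3 result = {0, 0, 0};
    for (int i = 0; i < 4; ++i) {
        if (weights[i] == 0.f) continue;   // unused influence
        Vec3 p = Transform(palette[indices[i]], pos);
        result.x += weights[i] * p.x;      // weighted average
        result.y += weights[i] * p.y;
        result.z += weights[i] * p.z;
    }
    return result;   // skinned position in world space
}
```

With weights that sum to 1, a vertex influenced half by a bone at rest and half by a bone translated 4 units along x ends up halfway between the two transformed positions.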
The process for normal vectors is analogous to that for positions: apply the transform for each bone, and calculate the weighted average.
The texture coordinates will simply be copied as-is, since they are not dependent on the position of the vertex.