Hey there,
this weekend I focused on animations. Animations are especially important for characters in games, as they deform the mesh and make the character “move”. There are basically two different ways of animating a 3D model: morph targets and skeletal animations. The former stores the vertex positions (and normals) multiple times for different poses and interpolates them at runtime. This technique is very easy to implement; however, storing each vertex multiple times increases the size of the mesh drastically. This is something I wanted to avoid, as download time is very critical in browser games.
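To illustrate the runtime side of morph targets, here is a minimal sketch (the function name and the flat vertex-array layout are my assumptions): each pose stores the full vertex array, and the current pose is a linear interpolation between two of them.

```javascript
// Linearly interpolate between two poses, each stored as a flat
// Float32Array of vertex positions [x0, y0, z0, x1, y1, z1, ...].
function lerpPose(poseA, poseB, t) {
  const out = new Float32Array(poseA.length);
  for (let i = 0; i < poseA.length; i++) {
    out[i] = poseA[i] + (poseB[i] - poseA[i]) * t;
  }
  return out;
}
```

Note that this touches every vertex of the mesh on the CPU each frame, in addition to the storage cost mentioned above.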
When using skeletal animations, a skeleton is created for the mesh and each vertex gets assigned to one or more bones. When a bone moves, all assigned vertices move with it. Thus, you only need to store the transformations for a few bones instead of the positions of all vertices.
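The per-vertex part of this is usually called linear blend skinning. A rough sketch, assuming 3x4 row-major bone matrices and per-vertex bone indices/weights (the names and layout are illustrative, not my actual code):

```javascript
// Apply a 3x4 row-major matrix [row0 | row1 | row2] to a point [x, y, z].
function transformPoint(m, p) {
  return [
    m[0] * p[0] + m[1] * p[1] + m[2]  * p[2] + m[3],
    m[4] * p[0] + m[5] * p[1] + m[6]  * p[2] + m[7],
    m[8] * p[0] + m[9] * p[1] + m[10] * p[2] + m[11],
  ];
}

// The skinned position is the weighted sum of the vertex position
// transformed by each bone it is assigned to.
function skinVertex(position, boneIndices, boneWeights, boneMatrices) {
  const out = [0, 0, 0];
  for (let i = 0; i < boneIndices.length; i++) {
    const p = transformPoint(boneMatrices[boneIndices[i]], position);
    out[0] += boneWeights[i] * p[0];
    out[1] += boneWeights[i] * p[1];
    out[2] += boneWeights[i] * p[2];
  }
  return out;
}
```

In practice this runs in the vertex shader, so the CPU never has to touch individual vertices.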
Typically, the bones are organised in a tree-like structure and the transformations are stored relative to their parent. So, when the parent bone moves, all child bones move accordingly (e.g. moving your arm also moves your hand). This makes the approach very flexible, and the skeleton can also be used to implement other techniques like ragdoll physics or inverse kinematics. However, before rendering, the relative transformations of all bones must be converted to global transformations. This task is usually very CPU intensive, as it involves a lot of interpolation and matrix multiplications. As JavaScript is not really known for its blazing performance, this is something I definitely wanted to avoid.
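The relative-to-global conversion is a walk down the bone tree, multiplying each bone's local matrix by its parent's global one. A sketch under assumed names (a `Bone` shaped like `{ local, children }`, 3x4 row-major matrices):

```javascript
// Multiply two 3x4 matrices, treating them as 4x4 with an implicit
// bottom row of [0, 0, 0, 1].
function mul34(a, b) {
  const out = new Array(12);
  for (let r = 0; r < 3; r++) {
    for (let c = 0; c < 4; c++) {
      out[r * 4 + c] =
        a[r * 4] * b[c] +
        a[r * 4 + 1] * b[4 + c] +
        a[r * 4 + 2] * b[8 + c] +
        (c === 3 ? a[r * 4 + 3] : 0); // implicit w = 1 for the last column
    }
  }
  return out;
}

const IDENTITY = [1,0,0,0, 0,1,0,0, 0,0,1,0];

// Depth-first walk: each bone's global matrix is parentGlobal * local.
function flattenSkeleton(bone, parentGlobal = IDENTITY, out = []) {
  const global = mul34(parentGlobal, bone.local);
  out.push(global);
  for (const child of bone.children) flattenSkeleton(child, global, out);
  return out;
}
```

This (plus the keyframe interpolation feeding it) is the per-frame CPU cost that the approach below sidesteps.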
So instead of relative transformations, I store a 3x4 matrix for each bone that contains the global transformation for a specific keyframe. I lose some flexibility and some accuracy when blending between keyframes, but it simplifies the process a lot. I can store those matrices in a texture so that I don’t have to re-upload them every frame, and linear texture filtering gives me virtually free blending between keyframes. You can see the results (and some hilarious outtakes) in the video below. However, I still need to implement animation blending and the possibility to add different animations: the former enables smooth transitions between different animations and the latter allows combining animations (e.g. so that the character can punch while running). Arbitrarily complex adding and blending setups can be done completely on the GPU by using the static keyframes to render the final bone transformations to a render-target texture that is then used to deform the mesh.
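One way to lay out such a bone texture (my assumption, not necessarily the exact format used here): each 3x4 matrix occupies three RGBA float texels, one row of the texture per keyframe. Sampling between two rows with linear filtering then blends the two keyframes' matrices component-wise.

```javascript
// Pack per-keyframe 3x4 bone matrices into a float texture buffer.
// keyframes[k][b] is a flat array of 12 floats (one 3x4 matrix).
// The resulting data can be uploaded with gl.texImage2D using a
// float format; linear filtering along the keyframe axis blends poses.
function packBoneTexture(keyframes) {
  const bones = keyframes[0].length;
  const width = bones * 3;          // 3 RGBA texels per 3x4 matrix
  const height = keyframes.length;  // one row per keyframe
  const data = new Float32Array(width * height * 4);
  for (let k = 0; k < height; k++) {
    for (let b = 0; b < bones; b++) {
      data.set(keyframes[k][b], (k * width + b * 3) * 4);
    }
  }
  return { width, height, data };
}
```

Note that naive component-wise blending of rotation matrices is only an approximation (the blended matrix is no longer a pure rotation), which is the accuracy trade-off mentioned above.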
During the rest of the week, I also worked on some other stuff: I wrote an “entity/component”-system, added (de-)serialization functionality and added support for textures and materials. Aside from the textures, none of that is directly visible, but it makes my life a lot easier.