{"id":1847,"date":"2024-02-07T16:17:10","date_gmt":"2024-02-07T16:17:10","guid":{"rendered":"https:\/\/timallanwheeler.com\/blog\/?p=1847"},"modified":"2024-07-20T03:21:47","modified_gmt":"2024-07-20T03:21:47","slug":"posing-meshes-with-opengl","status":"publish","type":"post","link":"https:\/\/timallanwheeler.com\/blog\/2024\/02\/07\/posing-meshes-with-opengl\/","title":{"rendered":"Posing Meshes with OpenGL"},"content":{"rendered":"\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"554\" src=\"https:\/\/timallanwheeler.com\/blog\/wp-content\/uploads\/2024\/02\/2024-02-06_21-02-1024x554.png\" alt=\"\" class=\"wp-image-1933\" srcset=\"https:\/\/timallanwheeler.com\/blog\/wp-content\/uploads\/2024\/02\/2024-02-06_21-02-1024x554.png 1024w, https:\/\/timallanwheeler.com\/blog\/wp-content\/uploads\/2024\/02\/2024-02-06_21-02-300x162.png 300w, https:\/\/timallanwheeler.com\/blog\/wp-content\/uploads\/2024\/02\/2024-02-06_21-02-768x416.png 768w, https:\/\/timallanwheeler.com\/blog\/wp-content\/uploads\/2024\/02\/2024-02-06_21-02.png 1284w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<p><\/p>\n\n\n\n<p>This month I&#8217;m continuing the work on the side scroller and moving from static meshes to meshes with animations. This is a fairly sizeable jump as it involves a whole host of new concepts, including bones and armatures, vertex painting, and interpolation between keyframes. For reference, the <a href=\"https:\/\/learnopengl.com\/Guest-Articles\/2020\/Skeletal-Animation\">Skeletal  Animation post on learnopengl.com<\/a> prints out to about 16 pages. That&#8217;s a lot to learn. Writing my own post about it helps me make sure I understand the fundamentals well enough to explain them to someone else.<\/p>\n\n\n\n<h1 class=\"wp-block-heading\">Moving Meshes<\/h1>\n\n\n\n<p>Our goal is to get our 3D meshes to move. 
This primarily means having the ability to make our player mesh move in prescribed ways, such as jumping, crouching, attacking, and climbing ladders. <\/p>\n\n\n\n<p>So far we&#8217;ve been working with meshes, i.e. vertex and face data that forms our player character. We want those vertices to move around. Our animations will specify how those vertices move around. That is, where the \\(m\\) mesh vertices are as a function of time:<\/p>\n\n\n\n<p>\\[\\vec{p}_i(t) \\text{ for } i \\text{ in } 1:m\\]\n\n\n\n<p>Unfortunately, a given mesh has a lot of vertices. The low-poly knight model I&#8217;m using has more than 1.5k of them, and it&#8217;s relatively tiny. Specifying where each individual vertex goes over time would be an excessive amount of work.<\/p>\n\n\n\n<p>Not only would specifying an animation on a per-vertex level be a lot of work, it would be very slow. One of the primary advantages of a graphics card is that we can ship the data over at setup time and not have to send it all over again:<\/p>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"237\" src=\"https:\/\/timallanwheeler.com\/blog\/wp-content\/uploads\/2024\/01\/2024-01-19_20-43-1024x237.png\" alt=\"\" class=\"wp-image-1851\" srcset=\"https:\/\/timallanwheeler.com\/blog\/wp-content\/uploads\/2024\/01\/2024-01-19_20-43-1024x237.png 1024w, https:\/\/timallanwheeler.com\/blog\/wp-content\/uploads\/2024\/01\/2024-01-19_20-43-300x69.png 300w, https:\/\/timallanwheeler.com\/blog\/wp-content\/uploads\/2024\/01\/2024-01-19_20-43-768x178.png 768w, https:\/\/timallanwheeler.com\/blog\/wp-content\/uploads\/2024\/01\/2024-01-19_20-43.png 1244w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n<\/div>\n\n\n<p><\/p>\n\n\n\n<p>With the rendering we&#8217;re doing now, we just update a few small uniforms every frame to change the player position. 
The mesh data itself is already on the GPU, stored as a vertex buffer object.<\/p>\n\n\n\n<p>So fully specifying the vertex positions over time is both extremely tedious and computationally expensive. What do we do instead?<\/p>\n\n\n\n<p>When a character moves, many of their vertices typically move together. For example, if my character punches, we expect all of the vertices in their clenched fist to move together, with their forearm following. Their upper arm follows that, maybe with a bend, etc. Vertices thus tend to be grouped, and we can leverage that grouping for increased efficiency.<\/p>\n\n\n\n<h1 class=\"wp-block-heading\">Bones<\/h1>\n\n\n\n<p>You know how artists sometimes have these wooden figures that they can bend and twist into poses? That&#8217;s the industry-standard way to do 3D animation.<\/p>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"487\" height=\"474\" src=\"https:\/\/timallanwheeler.com\/blog\/wp-content\/uploads\/2024\/01\/2024-01-19_21-00.png\" alt=\"\" class=\"wp-image-1852\" style=\"width:309px;height:auto\" srcset=\"https:\/\/timallanwheeler.com\/blog\/wp-content\/uploads\/2024\/01\/2024-01-19_21-00.png 487w, https:\/\/timallanwheeler.com\/blog\/wp-content\/uploads\/2024\/01\/2024-01-19_21-00-300x292.png 300w\" sizes=\"auto, (max-width: 487px) 100vw, 487px\" \/><\/figure>\n<\/div>\n\n\n<p><\/p>\n\n\n\n<p>We create an <em>armature<\/em>, which is a sort of rigid skeleton that we can use to pose the mesh. The armature is made up of rigid entities, <em>bones<\/em>, which can move around to produce the movement expected in our animation. There are far fewer bones than vertices. We then compute our vertex positions based on the bone positions &#8211; a vertex in the character&#8217;s fist is going to move with the fist bone.<\/p>\n\n\n\n<p>This approach solves our problems. 
The bones are much easier to pose and thus build animations out of, and we can continue to send the mesh and necessary bone information to the GPU once at startup, then send only the updated bone poses whenever we render the mesh.<\/p>\n\n\n\n<p>The bones in an armature have a hierarchical structure. Every bone has a single parent and any number of children, except the root bone, which has no parent.<\/p>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"277\" height=\"427\" src=\"https:\/\/timallanwheeler.com\/blog\/wp-content\/uploads\/2024\/01\/2024-01-19_21-17.png\" alt=\"\" class=\"wp-image-1853\" style=\"width:203px;height:auto\" srcset=\"https:\/\/timallanwheeler.com\/blog\/wp-content\/uploads\/2024\/01\/2024-01-19_21-17.png 277w, https:\/\/timallanwheeler.com\/blog\/wp-content\/uploads\/2024\/01\/2024-01-19_21-17-195x300.png 195w\" sizes=\"auto, (max-width: 277px) 100vw, 277px\" \/><\/figure>\n<\/div>\n\n\n<p><\/p>\n\n\n\n<p>Unlike my oh-so-fancy drawing, bones don&#8217;t actually take up space. They are just little reference frames. Each bone&#8217;s reference frame is given by a transformation \\(T\\) with respect to its parent. More specifically, \\(T\\) transforms a point in the bone&#8217;s frame to that of its parent.<\/p>\n\n\n\n<p>For example, we can use this transform to get the &#8220;location&#8221; of a bone. We can get the position of the root bone by transforming the origin of its frame to its parent &#8211; the model frame. This is given by \\(T_\\text{root} \\vec{o}\\), where \\(\\vec{o}\\) is the origin in <a href=\"https:\/\/en.wikipedia.org\/wiki\/Homogeneous_coordinates\">homogeneous coordinates<\/a>: \\([0,0,0,1]\\). Our transforms use 4&#215;4 matrices so that they can include translation, which is ubiquitous in 3D graphics. 
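<\/p>\n\n\n\n<p>To make this concrete, consider a bone whose transform is a pure translation by \\([2,1,0]\\). Multiplying the homogeneous origin by its 4&#215;4 matrix simply reads off the translation column:<\/p>\n\n\n\n<p>\\[\\begin{bmatrix} 1 &amp; 0 &amp; 0 &amp; 2 \\\\ 0 &amp; 1 &amp; 0 &amp; 1 \\\\ 0 &amp; 0 &amp; 1 &amp; 0 \\\\ 0 &amp; 0 &amp; 0 &amp; 1 \\end{bmatrix} \\begin{bmatrix} 0 \\\\ 0 \\\\ 0 \\\\ 1 \\end{bmatrix} = \\begin{bmatrix} 2 \\\\ 1 \\\\ 0 \\\\ 1 \\end{bmatrix}\\]\n\n\n\n<p>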
<\/p>\n\n\n\n<p>Similarly, the position of one of the root bone&#8217;s children with transform \\(T_\\text{c}\\) can be obtained by first computing its position in the root bone&#8217;s frame, \\(T_\\text{c} \\vec{o}\\), and then converting that to the model frame, \\(T_\\text{root} T_\\text{c} \\vec{o}\\). <\/p>\n\n\n\n<p>The order of operations matters! It is super important. It is what gives us the property that moving a leaf bone only affects that one bone, while moving a bone higher up in the hierarchy affects all of its descendants. If you ever can&#8217;t remember which order it is, and you can&#8217;t just quickly test it out in code, try composing two transforms together. <\/p>\n\n\n\n<p>The origin of a bone 4 levels deep is:<\/p>\n\n\n\n<p>\\[T_\\text{root} T_\\text{c1} T_\\text{c2} T_\\text{c3} T_\\text{c4} \\vec{o}\\]\n\n\n\n<p>Let&#8217;s call the aggregated transform for bone \\(c4\\) &#8211; the product of its ancestors&#8217; transforms and its own &#8211; \\(\\mathbb{T}_{c4}\\).<\/p>\n\n\n\n<p>It is these transformations that we&#8217;ll be changing in order to produce animations. Before we get to that, we have to figure out how the vertices move with the bones.<\/p>\n\n\n\n<h1 class=\"wp-block-heading\">Transforming a Vertex<\/h1>\n\n\n\n<p>The vertices are all defined in our mesh. They have positions in model space. <\/p>\n\n\n\n<p>Each bone has a position (and orientation) in model space, as we&#8217;ve already seen. <\/p>\n\n\n\n<p>Let&#8217;s consider what happens when we associate a vertex with a bone. That is, we &#8220;connect&#8221; the vertex to the bone such that, if the bone moves, the vertex moves with it. 
We expect the vertex to move along in the bone&#8217;s local frame:<\/p>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"1015\" height=\"341\" src=\"https:\/\/timallanwheeler.com\/blog\/wp-content\/uploads\/2024\/01\/2024-01-20_19-57.png\" alt=\"\" class=\"wp-image-1882\" style=\"width:566px;height:auto\" srcset=\"https:\/\/timallanwheeler.com\/blog\/wp-content\/uploads\/2024\/01\/2024-01-20_19-57.png 1015w, https:\/\/timallanwheeler.com\/blog\/wp-content\/uploads\/2024\/01\/2024-01-20_19-57-300x101.png 300w, https:\/\/timallanwheeler.com\/blog\/wp-content\/uploads\/2024\/01\/2024-01-20_19-57-768x258.png 768w\" sizes=\"auto, (max-width: 1015px) 100vw, 1015px\" \/><\/figure>\n<\/div>\n\n\n<p><\/p>\n\n\n\n<p>Here the bone both translates to the right and up, and rotates counter-clockwise by about 30 degrees. In the image, the vertex does the same.<\/p>\n\n\n\n<p>The image above has the bone starting off at the origin, with its axes aligned with the coordinate axes. This means the vertex is starting off in the bone&#8217;s reference frame. To get the vertex into the bone frame (it is originally defined in the model frame), we have to multiply it by \\(\\mathbb{T}^{-1}\\). Intuitively, if \\(\\mathbb{T} \\vec{p}\\) takes a point from bone space to model space, then the inverse takes a point from model space to bone space.<\/p>\n\n\n\n<p>Now suppose the bone moves. That means it has a new transform relative to its parent, \\(S\\). Plus, that parent bone may have moved, and so forth. We have to compute a new aggregate bone transform \\(\\mathbb{S}\\). 
The updated position of the vertex in model space is thus obtained by transforming it from its original model space into bone space with \\(\\mathbb{T}^{-1}\\) and then transforming it back to model space according to the new armature configuration with \\(\\mathbb{S}\\):<\/p>\n\n\n\n<p>\\[\\vec v&#8217; = \\mathbb{S} \\mathbb{T}^{-1} \\vec v\\]\n\n\n\n<p>We can visualize this as moving a vertex in the original mesh pose to the bone-relative space, and then transforming it back to model-space based on the new armature pose:<\/p>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-large is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"375\" src=\"https:\/\/timallanwheeler.com\/blog\/wp-content\/uploads\/2024\/01\/2024-01-20_21-06-1024x375.png\" alt=\"\" class=\"wp-image-1917\" style=\"width:734px;height:auto\" srcset=\"https:\/\/timallanwheeler.com\/blog\/wp-content\/uploads\/2024\/01\/2024-01-20_21-06-1024x375.png 1024w, https:\/\/timallanwheeler.com\/blog\/wp-content\/uploads\/2024\/01\/2024-01-20_21-06-300x110.png 300w, https:\/\/timallanwheeler.com\/blog\/wp-content\/uploads\/2024\/01\/2024-01-20_21-06-768x282.png 768w, https:\/\/timallanwheeler.com\/blog\/wp-content\/uploads\/2024\/01\/2024-01-20_21-06.png 1053w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n<\/div>\n\n\n<p><\/p>\n\n\n\n<p>This means we should store \\(\\mathbb{T}^{-1}\\) for each bone in the armature &#8211; we&#8217;re going to need it a lot. In any given frame we&#8217;ll just have to compute \\(\\mathbb{S}\\) by walking down the armature. We then compute \\(\\mathbb{S} \\mathbb{T}^{-1}\\) and pass that to our shader to properly pose the model.<\/p>\n\n\n\n<h1 class=\"wp-block-heading\">Bone Weights<\/h1>\n\n\n\n<p>The previous section let us move a vertex associated with a single bone. That works okay for very blocky models composed of rigid segments. For example, a stocky robot or simple car with spinning wheels. 
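<\/p>\n\n\n\n<p>Before we generalize, a quick sanity check of the single-bone formula \\(\\vec v&#8217; = \\mathbb{S} \\mathbb{T}^{-1} \\vec v\\): if the armature hasn&#8217;t moved at all, then \\(\\mathbb{S} = \\mathbb{T}\\), the two transforms cancel, and every vertex stays exactly where the mesh defined it:<\/p>\n\n\n\n<p>\\[\\vec v&#8217; = \\mathbb{T} \\mathbb{T}^{-1} \\vec v = \\vec v\\]\n\n\n\n<p>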
Most models are less rigid. Yes, if you punch someone you want the vertices in your fist to follow the hand bone, but as you extend your elbow the vertices near the joint will follow both the bone from the upper arm and the bone from the lower arm. We want a way to allow this to happen.<\/p>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"916\" height=\"458\" src=\"https:\/\/timallanwheeler.com\/blog\/wp-content\/uploads\/2024\/01\/2024-01-20_20-28.png\" alt=\"\" class=\"wp-image-1912\" style=\"width:510px;height:auto\" srcset=\"https:\/\/timallanwheeler.com\/blog\/wp-content\/uploads\/2024\/01\/2024-01-20_20-28.png 916w, https:\/\/timallanwheeler.com\/blog\/wp-content\/uploads\/2024\/01\/2024-01-20_20-28-300x150.png 300w, https:\/\/timallanwheeler.com\/blog\/wp-content\/uploads\/2024\/01\/2024-01-20_20-28-768x384.png 768w\" sizes=\"auto, (max-width: 916px) 100vw, 916px\" \/><\/figure>\n<\/div>\n\n\n<p><\/p>\n\n\n\n<p>Instead of a vertex being associated with one bone, we allow a vertex to be associated with multiple bones. The final vertex position is then a weighted mix of the contributions from those bones:<\/p>\n\n\n\n<p>\\[\\vec v&#8217; = \\sum_{i=1}^m w^{(i)} \\mathbb{S}_i \\mathbb{T}_i^{-1} \\vec v\\]\n\n\n\n<p>where the nonnegative weights \\(\\vec w\\) sum to one.<\/p>\n\n\n\n<p>In practice, most vertices are associated with one or two bones. It is common to allow up to 4 bone associations, simply because that covers most needs and then we can use the 4-dimensional vector types supported by OpenGL.<\/p>\n\n\n\n<p>Data-wise, what this means is:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>In addition to the original mesh data, we also need to provide an armature, which is a hierarchy of bones and their relative transformations.<\/li>\n\n\n\n<li>We also need to associate each vertex with some set of bones. 
This is typically done via a <code>vec4<\/code> of weights and an <code>ivec4<\/code> of bone indices. We store these alongside the other vertex data.<\/li>\n\n\n\n<li>The vertex data is typically static and can be stored on the graphics card in the vertex buffer object.<\/li>\n\n\n\n<li>We compute the inverse bone transforms for the original armature pose on startup.<\/li>\n\n\n\n<li>When we want to render a model, we pose the bones, compute the current bone transforms \\(\\mathbb{S}\\), and then compute and send \\(\\mathbb{S} \\mathbb{T}^{-1}\\) to the shader as a uniform.<\/li>\n<\/ul>\n\n\n\n<h1 class=\"wp-block-heading\">Coding it Up<\/h1>\n\n\n\n<p>Alright, enough talk. How do we get this implemented?<\/p>\n\n\n\n<p>We start by updating our definition for a vertex. In addition to the position, normal, and texture coordinates, we now also store the bone ids and weights:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code lang=\"cpp\" class=\"language-cpp\">struct RiggedVertex {\n    glm::vec3 position;   \/\/ location\n    glm::vec3 normal;     \/\/ normal vector\n    glm::vec2 tex_coord;  \/\/ texture coordinate\n\n    \/\/ Up to 4 bones can contribute to a particular vertex\n    \/\/ Unused bones must have zero weight\n    glm::ivec4 bone_ids;\n    glm::vec4 bone_weights;\n};<\/code><\/pre>\n\n\n\n<p>This means we have to update how we set up the vertex buffer object:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code lang=\"cpp\" class=\"language-cpp\">glGenVertexArrays(1, &amp;(mesh-&gt;vao));\nglGenBuffers(1, &amp;(mesh-&gt;vbo));\n\nglBindVertexArray(mesh-&gt;vao);\nglBindBuffer(GL_ARRAY_BUFFER, mesh-&gt;vbo);\nglBufferData(GL_ARRAY_BUFFER, mesh-&gt;vertices.size() * sizeof(RiggedVertex),\n                              &amp;mesh-&gt;vertices[0], GL_STATIC_DRAW);\n\n\/\/ vertex positions\nglEnableVertexAttribArray(0);\nglVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(RiggedVertex),\n                      (void*)offsetof(RiggedVertex, position));\n\/\/ vertex 
normals\nglEnableVertexAttribArray(1);\nglVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(RiggedVertex),\n                      (void*)offsetof(RiggedVertex, normal));\n\/\/ texture coordinates\nglEnableVertexAttribArray(2);\nglVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, sizeof(RiggedVertex),\n                      (void*)offsetof(RiggedVertex, tex_coord));\n\/\/ bone ids (max 4)\nglEnableVertexAttribArray(3);\nglVertexAttribIPointer(3, 4, GL_INT, sizeof(RiggedVertex),\n                       (void*)offsetof(RiggedVertex, bone_ids));\n\/\/ bone weights (max 4)\nglEnableVertexAttribArray(4);\nglVertexAttribPointer(4, 4, GL_FLOAT, GL_FALSE, sizeof(RiggedVertex),\n                       (void*)offsetof(RiggedVertex, bone_weights));<\/code><\/pre>\n\n\n\n<p>Note the use of <code>glVertexAttribIPointer<\/code> instead of <code>glVertexAttribPointer<\/code> for the bone ids. That problem took me many hours to figure out. It turns out that <code>glVertexAttribPointer<\/code> accepts integers but has the card interpret them as floating point, which messes everything up if you intend to actually use integers on the shader side.<\/p>\n\n\n\n<p>As far as our shader goes, we are only changing where the vertices are located, not how they are colored. As such, we only need to update our vertex shader (not the fragment shader). 
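<\/p>\n\n\n\n<p>Before looking at the shader itself, it can help to see the weighted blend written out on the CPU. The sketch below is purely illustrative &#8211; it uses plain arrays in place of glm types, and the names (<code>skin<\/code>, <code>Mat4<\/code>) are hypothetical rather than part of the actual engine code:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code lang=\"cpp\" class=\"language-cpp\">#include &lt;array&gt;\n#include &lt;vector&gt;\n\nusing Vec4 = std::array&lt;float, 4&gt;;\nusing Mat4 = std::array&lt;float, 16&gt;;  \/\/ row-major 4x4 matrix\n\n\/\/ Multiply a 4x4 matrix by a homogeneous point\nVec4 mul(const Mat4&amp; m, const Vec4&amp; v) {\n    Vec4 r{};\n    for (int i = 0; i &lt; 4; i++)\n        for (int j = 0; j &lt; 4; j++)\n            r[i] += m[4 * i + j] * v[j];\n    return r;\n}\n\n\/\/ v_out = sum_i w_i * (S_i * Tinv_i) * v over up to 4 bone influences.\n\/\/ bone_transforms holds the precomputed S*Tinv matrix for each bone.\nVec4 skin(const Vec4&amp; v, const std::array&lt;int, 4&gt;&amp; bone_ids,\n          const std::array&lt;float, 4&gt;&amp; bone_weights,\n          const std::vector&lt;Mat4&gt;&amp; bone_transforms) {\n    Vec4 out{};\n    for (int i = 0; i &lt; 4; i++) {\n        Vec4 p = mul(bone_transforms[bone_ids[i]], v);\n        for (int k = 0; k &lt; 4; k++)\n            out[k] += bone_weights[i] * p[k];  \/\/ unused slots have weight 0\n    }\n    return out;\n}<\/code><\/pre>\n\n\n\n<p>A vertex weighted entirely to one bone moves rigidly with that bone, while a 50\/50 split lands halfway between the two bones&#8217; motions.<\/p>\n\n\n\n<p>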
The new shader is:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code lang=\"cpp\" class=\"language-cpp\">#version 330 core\n\nlayout (location = 0) in vec3 aPos;\nlayout (location = 1) in vec3 aNormal;\nlayout (location = 2) in vec2 aTexCoord;\nlayout (location = 3) in ivec4 aBoneIds; \nlayout (location = 4) in vec4 aBoneWeights;\n\nout vec3 pos_world;\nout vec3 normal_world;\nout vec2 tex_coords;\n\nconst int MAX_BONES = 100;\nconst int MAX_BONE_INFLUENCE = 4;\n\nuniform mat4 model; \/\/ transform from model to world space\nuniform mat4 view; \/\/ transform from world to view space\nuniform mat4 projection; \/\/ transform from view space to clip space\nuniform mat4 bone_transforms[MAX_BONES]; \/\/ S*Tinv for each bone\n\nvoid main()\n{\n    \/\/ Accumulate the position over the bones, in model space\n    vec4 pos_model = vec4(0.0f);\n    vec3 normal_model = vec3(0.0f);\n    for(int i = 0 ; i &lt; MAX_BONE_INFLUENCE; i++)\n    {\n        int j = aBoneIds[i];\n        float w = aBoneWeights[i];\n        mat4 STinv = bone_transforms[j];\n        pos_model += w*(STinv*vec4(aPos,1.0f));\n        normal_model += w*(mat3(STinv)*aNormal);\n    }\n\n    pos_world = vec3(model * pos_model);\n    normal_world = mat3(model) * normal_model;\n    gl_Position = projection * view * model * pos_model;\n    tex_coords = vec2(aTexCoord.x, 1.0 - aTexCoord.y);\n}<\/code><\/pre>\n\n\n\n<p>Note that the normal vector doesn&#8217;t get translated, so we only use <code>mat3<\/code>. 
The vertex&#8217;s texture coordinate is not affected.<\/p>\n\n\n\n<h1 class=\"wp-block-heading\">Testing it Out<\/h1>\n\n\n\n<p>Let&#8217;s start with something simple, a 2-triangle, 2-bone system with 5 vertices:<\/p>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"942\" height=\"335\" src=\"https:\/\/timallanwheeler.com\/blog\/wp-content\/uploads\/2024\/01\/2024-01-20_22-00.png\" alt=\"\" class=\"wp-image-1925\" style=\"width:669px;height:auto\" srcset=\"https:\/\/timallanwheeler.com\/blog\/wp-content\/uploads\/2024\/01\/2024-01-20_22-00.png 942w, https:\/\/timallanwheeler.com\/blog\/wp-content\/uploads\/2024\/01\/2024-01-20_22-00-300x107.png 300w, https:\/\/timallanwheeler.com\/blog\/wp-content\/uploads\/2024\/01\/2024-01-20_22-00-768x273.png 768w\" sizes=\"auto, (max-width: 942px) 100vw, 942px\" \/><\/figure>\n<\/div>\n\n\n<p><\/p>\n\n\n\n<p>The root bone is at the origin (via the identity transform) and the child bone is at [2,1,0]. 
The vertices of the left triangle are all associated only with the root bone and the rightmost two vertices are only associated with the child bone.<\/p>\n\n\n\n<p>If we slap on a sin transform for the child bone, we wave the right triangle:<\/p>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"320\" height=\"180\" src=\"https:\/\/timallanwheeler.com\/blog\/wp-content\/uploads\/2024\/01\/wiggle_child_arm.gif\" alt=\"\" class=\"wp-image-1927\"\/><\/figure>\n<\/div>\n\n\n<p><\/p>\n\n\n\n<p>If we instead slap a sin transform on the root bone, we wave both triangles as a rigid unit:<\/p>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"320\" height=\"180\" src=\"https:\/\/timallanwheeler.com\/blog\/wp-content\/uploads\/2024\/01\/wiggle_root.gif\" alt=\"\" class=\"wp-image-1929\"\/><\/figure>\n<\/div>\n\n\n<p><\/p>\n\n\n\n<p>We can apply a sin to both triangles, in which case the motions compose like they do if you bend both your upper and lower arms:<\/p>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"320\" height=\"180\" src=\"https:\/\/timallanwheeler.com\/blog\/wp-content\/uploads\/2024\/01\/wiggle_both.gif\" alt=\"\" class=\"wp-image-1930\"\/><\/figure>\n<\/div>\n\n\n<p><\/p>\n\n\n\n<p>We can build a longer segmented arm and move each segment a bit differently:<\/p>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"320\" height=\"180\" src=\"https:\/\/timallanwheeler.com\/blog\/wp-content\/uploads\/2024\/01\/wiggle_long.gif\" alt=\"\" class=\"wp-image-1931\"\/><\/figure>\n<\/div>\n\n\n<p><\/p>\n\n\n\n<h1 class=\"wp-block-heading\">Posing a Model<\/h1>\n\n\n\n<p>Now that we&#8217;ve built up some confidence in our math, we can start posing our model (The low-poly 
knight by Davmid, <a href=\"https:\/\/skfb.ly\/6Y9Mq\">available here on Sketchfab<\/a>). The ideas are exactly the same &#8211; we load the model, define some bone transforms, and then use those bone transforms to render it in the desired pose.<\/p>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"320\" height=\"180\" src=\"https:\/\/timallanwheeler.com\/blog\/wp-content\/uploads\/2024\/02\/posing_1.gif\" alt=\"\" class=\"wp-image-1934\"\/><\/figure>\n<\/div>\n\n\n<p><\/p>\n\n\n\n<p>I created an animation editing mode that lets us do this posing. As before, the UI stuff is all done with <a href=\"https:\/\/github.com\/ocornut\/imgui\">Dear ImGui<\/a>. It lets you select your model, add animations, add frames to those animations, and adjust the bone transforms:<\/p>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"930\" height=\"652\" src=\"https:\/\/timallanwheeler.com\/blog\/wp-content\/uploads\/2024\/02\/2024-02-07_07-23.png\" alt=\"\" class=\"wp-image-1935\" srcset=\"https:\/\/timallanwheeler.com\/blog\/wp-content\/uploads\/2024\/02\/2024-02-07_07-23.png 930w, https:\/\/timallanwheeler.com\/blog\/wp-content\/uploads\/2024\/02\/2024-02-07_07-23-300x210.png 300w, https:\/\/timallanwheeler.com\/blog\/wp-content\/uploads\/2024\/02\/2024-02-07_07-23-768x538.png 768w\" sizes=\"auto, (max-width: 930px) 100vw, 930px\" \/><\/figure>\n<\/div>\n\n\n<p><\/p>\n\n\n\n<p>A single frame represents an overall model pose by defining the transformations for each bone:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code lang=\"cpp\" class=\"language-cpp\">struct Keyframe {\n    f32 duration;\n    std::vector&lt;glm::vec3&gt; positions;\n    std::vector&lt;glm::quat&gt; orientations;\n    std::vector&lt;glm::vec3&gt; scales;\n};<\/code><\/pre>\n\n\n\n<p>The position, orientation, and scale are stored separately, mainly because it is easier to 
reason about them when they are broken out like this. Later, when we do animation, we&#8217;ll have to interpolate between frames, and it is also easier to reason about interpolation between these components rather than between the overall transforms. A bone&#8217;s local transform \\(S\\) is simply the combination of its position, orientation, and scale. <\/p>\n\n\n\n<p>We calculate our final bone transforms \\(\\mathbb{S} \\mathbb{T}^{-1}\\) for our frame and then pass them off to the shader for rendering:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code lang=\"cpp\" class=\"language-cpp\">size_t n_bones = mesh-&gt;bone_transforms_final.size();\nfor (size_t i_bone = 0; i_bone &lt; n_bones; i_bone++) {\n    glm::vec3 pos = frame.positions[i_bone];\n    glm::quat ori = frame.orientations[i_bone];\n    glm::vec3 scale = frame.scales[i_bone];\n\n    \/\/ Compute the new current transform\n    glm::mat4 Sbone = glm::translate(glm::mat4(1.0f), pos);\n    Sbone = Sbone * glm::mat4_cast(ori);\n    Sbone = glm::scale(Sbone, scale);\n\n    if (mesh-&gt;bone_parents[i_bone] &gt;= 0) {\n        const glm::mat4&amp; Sparent =\n            mesh-&gt;bone_transforms_curr[mesh-&gt;bone_parents[i_bone]];\n        Sbone = Sparent * Sbone;\n    }\n\n    mesh-&gt;bone_transforms_curr[i_bone] = Sbone;\n\n    glm::mat4 Tinv = mesh-&gt;bone_transforms_orig[i_bone];\n    mesh-&gt;bone_transforms_final[i_bone] = Sbone * Tinv;\n}<\/code><\/pre>\n\n\n\n<p><\/p>\n\n\n\n<h1 class=\"wp-block-heading\">Conclusion<\/h1>\n\n\n\n<p>When you get down to it, the math behind rigged meshes isn&#8217;t all too bad. The main idea is that you have bones with transforms relative to their parent bone, and that vertices are associated with a combination of bones. 
You have to implement rigged meshes that store your bones and vertex weights, a shader that applies the bone transforms, and a way to represent your model pose.<\/p>\n\n\n\n<p>The main machinery doesn&#8217;t take too long to implement in theory, but this post took me way longer to develop than normal because I wrote a bunch of other supporting code (plus banged my head against the wall when things didn&#8217;t initially work &#8211; it&#8217;s hard to debug a garbled mess).<\/p>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"555\" height=\"412\" src=\"https:\/\/timallanwheeler.com\/blog\/wp-content\/uploads\/2024\/02\/2024-01-18_22-09.png\" alt=\"\" class=\"wp-image-1936\" style=\"width:311px;height:auto\" srcset=\"https:\/\/timallanwheeler.com\/blog\/wp-content\/uploads\/2024\/02\/2024-01-18_22-09.png 555w, https:\/\/timallanwheeler.com\/blog\/wp-content\/uploads\/2024\/02\/2024-01-18_22-09-300x223.png 300w\" sizes=\"auto, (max-width: 555px) 100vw, 555px\" \/><\/figure>\n<\/div>\n\n\n<p><\/p>\n\n\n\n<p>Adding my own animation editor meant I probably wanted to save those animations to disk. That meant I probably wanted to save my meshes to disk too, which meant saving the models, textures, and materials. Getting all that set up took time, and its own fair share of debugging. (I&#8217;m using a WAD-like data structure like I did for TOOM).<\/p>\n\n\n\n<p>The animation editor also took some work. Dear ImGui makes writing UI code a lot easier, but adding all of the widgets and restructuring a lot of my code to be able to have separate application modes for things like gameplay and the animation editor simply took time. The largest time suck (and hit to motivation) for a larger project like this is knowing you have to refactor a large chunk of it to make forward progress. 
It&#8217;s great to reach the other end though and to have something that makes sense.<\/p>\n\n\n\n<p>We&#8217;re all set up with posable models now, so the next step is to render animations. Once that&#8217;s possible we&#8217;ll be able to tie those animations to the gameplay &#8212; have the model walk, jump, and stab accordingly. We&#8217;ll then be able to attach other meshes to our models, e.g. a sword in its hand, and tie hitspheres and hurtspheres to the animation, which will form the core of our combat system.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>This month I&#8217;m continuing the work on the side scroller and moving from static meshes to meshes with animations. This is a fairly sizeable jump as it involves a whole host of new concepts, including bones and armatures, vertex painting, and interpolation between keyframes. For reference, the Skeletal Animation post on learnopengl.com prints out to [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[10],"class_list":["post-1847","post","type-post","status-publish","format-standard","hentry","category-uncategorized","tag-sidescroller"],"_links":{"self":[{"href":"https:\/\/timallanwheeler.com\/blog\/wp-json\/wp\/v2\/posts\/1847","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/timallanwheeler.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/timallanwheeler.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/timallanwheeler.com\/blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/timallanwheeler.com\/blog\/wp-json\/wp\/v2\/comments?post=1847"}],"version-history":[{"count":78,"href":"https:\/\/timallanwheeler.com\/blog\/wp-json\/wp\/v2\/posts\/1847\/revisions"}],"predecessor-version":[{"id":2438,"href":"https:\/\/timallanwheeler.com\/blog\/wp-json\/w
p\/v2\/posts\/1847\/revisions\/2438"}],"wp:attachment":[{"href":"https:\/\/timallanwheeler.com\/blog\/wp-json\/wp\/v2\/media?parent=1847"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/timallanwheeler.com\/blog\/wp-json\/wp\/v2\/categories?post=1847"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/timallanwheeler.com\/blog\/wp-json\/wp\/v2\/tags?post=1847"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}