{"id":1745,"date":"2024-01-01T17:19:51","date_gmt":"2024-01-01T17:19:51","guid":{"rendered":"https:\/\/timallanwheeler.com\/blog\/?p=1745"},"modified":"2024-07-20T03:22:48","modified_gmt":"2024-07-20T03:22:48","slug":"rendering-with-opengl","status":"publish","type":"post","link":"https:\/\/timallanwheeler.com\/blog\/2024\/01\/01\/rendering-with-opengl\/","title":{"rendered":"Rendering with OpenGL"},"content":{"rendered":"\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"960\" height=\"551\" src=\"https:\/\/timallanwheeler.com\/blog\/wp-content\/uploads\/2023\/12\/2023-12-29_08-09.png\" alt=\"\" class=\"wp-image-1816\" srcset=\"https:\/\/timallanwheeler.com\/blog\/wp-content\/uploads\/2023\/12\/2023-12-29_08-09.png 960w, https:\/\/timallanwheeler.com\/blog\/wp-content\/uploads\/2023\/12\/2023-12-29_08-09-300x172.png 300w, https:\/\/timallanwheeler.com\/blog\/wp-content\/uploads\/2023\/12\/2023-12-29_08-09-768x441.png 768w\" sizes=\"auto, (max-width: 960px) 100vw, 960px\" \/><\/figure>\n\n\n\n<p><\/p>\n\n\n\n<p>In the previous post I introduced a new side scroller project, and laid out some of the things I wanted to accomplish with it. One of those points was to render with the graphics card. I had never worked directly with 3D assets and graphics cards before. Unless some high-level API was doing rendering for me under the hood, I wasn&#8217;t harnessing the power of modern graphics pipelines.<\/p>\n\n\n\n<p>For <a href=\"https:\/\/timallanwheeler.com\/blog\/2023\/08\/31\/toom-with-non-euclidean-geometry\/\">TOOM<\/a> I was not using the graphics card. In fact, the whole point of TOOM was to write a <em>software renderer<\/em>, which only uses the CPU for graphics. 
That was a great learning experience, but there is a reason graphics cards exist: software renderers spend a lot of CPU time filling pixels that a GPU could churn through far more efficiently.<\/p>\n\n\n\n<p>Before starting, I thought that learning to render with the graphics card would more or less boil down to calling <code>render(mesh)<\/code>. I assumed the bulk of the benefit would be faster calls. What I did not anticipate was learning how rich render pipelines can be, with customized shaders and data formats opening up a whole world of neat possibilities. I love it when building a basic understanding of something suddenly causes all sorts of other things to click. For example, CUDA used to be an abstract, scary performance thing that only super smart programmers would touch. Now that I know a few things, I can conceptualize how it works. It is a pretty nice feeling.<\/p>\n\n\n\n<h1 class=\"wp-block-heading\">Learning OpenGL<\/h1>\n\n\n\n<p>I learned OpenGL the way many people do. I went to <a href=\"http:\/\/learnopengl.com\">learnopengl.com<\/a> and followed their wonderful tutorials. It took some doing, and it was somewhat onerous, but it was well worth it. The chapters are well laid out and gradually build up concepts.<\/p>\n\n\n\n<p>This process had me install a few additional libraries. First off, I learned that one does not commonly use OpenGL directly. It involves a bunch of function pointers corresponding to various OpenGL versions, and getting the right function pointers from the graphics driver can be rather tedious. Instead, one uses a tool like <a href=\"https:\/\/glad.dav1d.de\/\">GLAD<\/a> to generate the necessary code for you. 
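<\/p>\n\n\n\n<p>Conceptually, the code GLAD generates boils down to a pile of function-pointer declarations plus a loader that resolves each one by name through a platform-provided get-proc-address callback (e.g. <code>SDL_GL_GetProcAddress<\/code>). The following is a toy sketch of that pattern with a fake resolver standing in for the driver; the names are mine, not GLAD&#8217;s actual output:<\/p>\n\n\n\n

```cpp
#include <cstring>

// One function pointer per GL entry point, resolved at runtime.
typedef void (*GlClearFn)(unsigned int mask);
static GlClearFn glad_glClear = nullptr;

// The platform hands us a resolver (e.g. SDL_GL_GetProcAddress).
typedef void* (*GetProcAddressFn)(const char* name);

bool load_gl(GetProcAddressFn get_proc) {
    glad_glClear = reinterpret_cast<GlClearFn>(get_proc("glClear"));
    return glad_glClear != nullptr;
}

// A fake resolver so the sketch runs without a real driver.
static void fake_clear(unsigned int) {}
void* fake_get_proc(const char* name) {
    if (std::strcmp(name, "glClear") == 0) {
        return reinterpret_cast<void*>(fake_clear);
    }
    return nullptr;
}
```

\n\n\n\n<p>The real generated loader does this for hundreds of entry points, filtered to the GL version and extensions requested.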
<\/p>\n\n\n\n<p>As recommended by the tutorials, I am using <a href=\"https:\/\/github.com\/nothings\/stb\/blob\/master\/stb_image.h\">stb_image<\/a> by the legendary Sean Barrett for loading images for textures, and <a href=\"https:\/\/github.com\/assimp\/assimp\">assimp<\/a> for loading 3D asset files. I still find it hilarious that they can get away with basically calling that last one Butt Devil. Lastly, I&#8217;m using the <a href=\"https:\/\/github.com\/g-truc\/glm\">OpenGL mathematics library<\/a> (glm) for math types compatible with OpenGL. I am still using SDL2 but switched to the imgui_impl_opengl3 backend. These were all pretty easy to install and use &#8212; the tutorials go a long way and the rest is pretty self-explanatory.<\/p>\n\n\n\n<p>I did run into an issue where my graphics driver was not new enough to support the OpenGL version that learnopengl uses (3.3). I resolved it by updating my OS to Ubuntu 22.04. I had not upgraded in about 4 years on account of relying on this machine to compile <a href=\"https:\/\/algorithmsbook.com\/optimization\">the<\/a> <a href=\"https:\/\/algorithmsbook.com\/\">textbooks<\/a>, but given that they&#8217;re out I felt I didn&#8217;t have to be so paranoid anymore.<\/p>\n\n\n\n<p>In the end, updating to Jammy Jellyfish was pretty easy, except <em>it broke all of my screen capture software<\/em>. I previously used <a href=\"https:\/\/shutter-project.org\/\">Shutter<\/a> and <a href=\"https:\/\/github.com\/phw\/peek\">Peek<\/a> to take screenshots and capture GIFs. Apparently that all went away with <a href=\"https:\/\/wiki.ubuntu.com\/Wayland\">Wayland<\/a>: now I need to give <a href=\"https:\/\/flameshot.org\/docs\/installation\/installation-linux\/\">Flameshot<\/a> permission every time I take a screenshot, and I had no solution for GIF recordings, even when logging in with Xorg. 
<\/p>\n\n\n\n<p>In writing this post I buckled down and tried Kooha again, which tries to record but then hangs forever on save (like Peek). I ended up installing <a href=\"https:\/\/obsproject.com\/\">OBS Studio<\/a>, recording a .mp4, and then converting that to a GIF in the terminal via ffmpeg:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code class=\"\">ffmpeg -ss 1 -i input.mp4 \\\n  -vf \"fps=30,scale=320:-1:flags=lanczos,split[s0][s1];[s0]palettegen[p];[s1][p]paletteuse\" \\\n   -loop 0 2d_to_3d.gif<\/code><\/pre>\n\n\n\n<p>This was only possible because of <a href=\"https:\/\/superuser.com\/questions\/556029\/how-do-i-convert-a-video-to-gif-using-ffmpeg-with-reasonable-quality\">this excellent Stack Overflow answer<\/a> and this <a href=\"https:\/\/schnerring.net\/blog\/use-obs-and-ffmpeg-to-create-gif-like-screencasts\/\">OBS blog post<\/a>. The resulting quality is actually a lot better than Peek&#8217;s, though I&#8217;m finding I have to adjust some settings to keep the file size reasonable. I suppose it&#8217;s nice to be in control, but I really liked Peek&#8217;s convenience.<\/p>\n\n\n\n<h1 class=\"wp-block-heading\">Moving to 3D<\/h1>\n\n\n\n<p>The side scroller logic was all done in a 2D space, but now rendering can happen in 3D. 
The side scroller game logic will largely remain 2D &#8211; the player entities will continue to move around 2D world geometry:<\/p>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-large is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"545\" src=\"https:\/\/timallanwheeler.com\/blog\/wp-content\/uploads\/2023\/12\/image-1024x545.png\" alt=\"\" class=\"wp-image-1751\" style=\"width:574px;height:auto\" srcset=\"https:\/\/timallanwheeler.com\/blog\/wp-content\/uploads\/2023\/12\/image-1024x545.png 1024w, https:\/\/timallanwheeler.com\/blog\/wp-content\/uploads\/2023\/12\/image-300x160.png 300w, https:\/\/timallanwheeler.com\/blog\/wp-content\/uploads\/2023\/12\/image-768x409.png 768w, https:\/\/timallanwheeler.com\/blog\/wp-content\/uploads\/2023\/12\/image.png 1129w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n<\/div>\n\n\n<p><\/p>\n\n\n\n<p>but that 2D world geometry all exists at depth \\(z=0\\) with respect to a perspective-enabled camera:<\/p>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"320\" height=\"180\" src=\"https:\/\/timallanwheeler.com\/blog\/wp-content\/uploads\/2023\/12\/2d_to_3d-2.gif\" alt=\"\" class=\"wp-image-1757\"\/><\/figure>\n<\/div>\n\n\n<p><\/p>\n\n\n\n<p>Conceptually, this is actually somewhat more complicated than simply calling DrawLine for each edge, which is what I was doing before via SDL2:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code class=\"\">for edge in mesh\n    DrawLine(edge)<\/code><\/pre>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"507\" height=\"157\" src=\"https:\/\/timallanwheeler.com\/blog\/wp-content\/uploads\/2024\/01\/2023-12-31_16-22.png\" alt=\"\" class=\"wp-image-1830\" srcset=\"https:\/\/timallanwheeler.com\/blog\/wp-content\/uploads\/2024\/01\/2023-12-31_16-22.png 507w, 
https:\/\/timallanwheeler.com\/blog\/wp-content\/uploads\/2024\/01\/2023-12-31_16-22-300x93.png 300w\" sizes=\"auto, (max-width: 507px) 100vw, 507px\" \/><\/figure>\n<\/div>\n\n\n<p><\/p>\n\n\n\n<p>Rendering in 3D requires:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>setting up a vertex buffer and filling it with my mesh data, including color<\/li>\n\n\n\n<li>telling OpenGL how to interpret said data<\/li>\n\n\n\n<li>using a vertex shader that takes the position and color attributes and transforms them via the camera transformations into screen space<\/li>\n\n\n\n<li>using a dead-simple fragment shader that just passes on the received vertex colors<\/li>\n\n\n\n<li>calling <code>glDrawArrays<\/code> with GL_LINES to execute all of this machinery <\/li>\n<\/ul>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"723\" height=\"328\" src=\"https:\/\/timallanwheeler.com\/blog\/wp-content\/uploads\/2023\/12\/image-1.png\" alt=\"\" class=\"wp-image-1772\" style=\"width:569px;height:auto\" srcset=\"https:\/\/timallanwheeler.com\/blog\/wp-content\/uploads\/2023\/12\/image-1.png 723w, https:\/\/timallanwheeler.com\/blog\/wp-content\/uploads\/2023\/12\/image-1-300x136.png 300w\" sizes=\"auto, (max-width: 723px) 100vw, 723px\" \/><\/figure>\n<\/div>\n\n\n<p><\/p>\n\n\n\n<p>This is not hard to do per se, but it is more complicated. One nice consequence is that there is only a single draw call for all this mesh data rather than a bunch of independent line draw calls. 
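<\/p>\n\n\n\n<p>To make the &#8220;telling OpenGL how to interpret said data&#8221; step concrete, here is a hypothetical sketch of the CPU side: an interleaved position-plus-color vertex and the stride and offset arithmetic that the attribute setup relies on. The actual <code>glVertexAttribPointer<\/code> calls appear as comments, since they need a live GL context:<\/p>\n\n\n\n

```cpp
#include <cstddef>
#include <vector>

// Hypothetical interleaved vertex for debug line drawing: a position
// followed by a color, matching the two vertex shader attributes.
struct LineVertex {
    float position[3];
    float color[3];
};

// Two vertices per edge; the whole mesh goes into one buffer.
void push_edge(std::vector<LineVertex>& buffer,
               const LineVertex& a, const LineVertex& b) {
    buffer.push_back(a);
    buffer.push_back(b);
}

// With a live GL context, the buffer layout would be described via:
//   glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(LineVertex),
//                         (void*)offsetof(LineVertex, position));
//   glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(LineVertex),
//                         (void*)offsetof(LineVertex, color));
// and the whole thing submitted with glDrawArrays(GL_LINES, 0, count).
```

\n\n\n\n<p>Each attribute is described once by its byte offset into the struct and the common stride.<\/p>\n\n\n\n<p>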
The GPU then crunches it all in parallel:<\/p>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"685\" height=\"277\" src=\"https:\/\/timallanwheeler.com\/blog\/wp-content\/uploads\/2024\/01\/2023-12-31_16-24.png\" alt=\"\" class=\"wp-image-1835\" srcset=\"https:\/\/timallanwheeler.com\/blog\/wp-content\/uploads\/2024\/01\/2023-12-31_16-24.png 685w, https:\/\/timallanwheeler.com\/blog\/wp-content\/uploads\/2024\/01\/2023-12-31_16-24-300x121.png 300w\" sizes=\"auto, (max-width: 685px) 100vw, 685px\" \/><\/figure>\n<\/div>\n\n\n<p><\/p>\n\n\n\n<p>The coordinate transforms for this are basically what learnopengl recommends:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>an identity model matrix, since the 2D mesh data is already in the world frame<\/li>\n\n\n\n<li>a view matrix obtained via <code>glm::lookAt<\/code> based on a camera position and orientation<\/li>\n\n\n\n<li>a projection matrix obtained via <code>glm::perspective<\/code> with a 45-degree viewing angle, our window&#8217;s aspect ratio, and \(z\) bounds between 0.1 and 100:<\/li>\n<\/ul>\n\n\n\n<pre class=\"wp-block-code\"><code lang=\"cpp\" class=\"language-cpp\">glm::mat4 projection = glm::perspective(\n     glm::radians(45.0f),\n     screen_size_x \/ screen_size_y,\n     0.1f, 100.0f);<\/code><\/pre>\n\n\n\n<p>This setup worked great to get to parity with the debug line drawing I was doing before, but I wanted to be using OpenGL for some real mesh rendering. I ended up defining two shader programs in addition to the debug-line-drawing one: one for textured meshes and one for material-based meshes. 
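<\/p>\n\n\n\n<p>As an aside, the matrix <code>glm::perspective<\/code> builds is easy to verify by hand. Here is a minimal re-derivation (column-major, standard OpenGL clip-space conventions; this is for illustration, not the code the game uses):<\/p>\n\n\n\n

```cpp
#include <cmath>

// Column-major 4x4 matrix, m[col][row], zero-initialized, like glm's layout.
struct Mat4 { float m[4][4] = {}; };

// The standard OpenGL perspective projection for a vertical field of view,
// aspect ratio, and near/far plane distances.
Mat4 perspective(float fov_y_radians, float aspect, float z_near, float z_far) {
    const float f = 1.0f / std::tan(fov_y_radians / 2.0f);
    Mat4 p;
    p.m[0][0] = f / aspect;
    p.m[1][1] = f;
    p.m[2][2] = -(z_far + z_near) / (z_far - z_near);
    p.m[2][3] = -1.0f;  // puts -z into w, producing the perspective divide
    p.m[3][2] = -(2.0f * z_far * z_near) / (z_far - z_near);
    return p;
}
```

\n\n\n\n<p>That lone -1 is what makes distant geometry shrink: after the divide by w, vertices farther down the -z axis map to smaller screen coordinates.<\/p>\n\n\n\n<p>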
I am now able to render 3D meshes!<\/p>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"969\" height=\"592\" src=\"https:\/\/timallanwheeler.com\/blog\/wp-content\/uploads\/2023\/12\/2023-12-12_21-28.png\" alt=\"\" class=\"wp-image-1770\" srcset=\"https:\/\/timallanwheeler.com\/blog\/wp-content\/uploads\/2023\/12\/2023-12-12_21-28.png 969w, https:\/\/timallanwheeler.com\/blog\/wp-content\/uploads\/2023\/12\/2023-12-12_21-28-300x183.png 300w, https:\/\/timallanwheeler.com\/blog\/wp-content\/uploads\/2023\/12\/2023-12-12_21-28-768x469.png 768w\" sizes=\"auto, (max-width: 969px) 100vw, 969px\" \/><\/figure>\n<\/div>\n\n\n<p><\/p>\n\n\n\n<p>This low-poly knight was created by Davmid, and is <a href=\"https:\/\/skfb.ly\/6Y9Mq\">available here on Sketchfab<\/a>. The rest of the assets I made myself in Blender and exported to FBX. I had never used Blender before. Yeah, it&#8217;s been quite a month of learning new tools.<\/p>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"320\" height=\"180\" src=\"https:\/\/timallanwheeler.com\/blog\/wp-content\/uploads\/2023\/12\/2023-12-27_21-28-55-1.gif\" alt=\"\" class=\"wp-image-1810\" style=\"width:610px;height:auto\"\/><\/figure>\n<\/div>\n\n\n<p><\/p>\n\n\n\n<p>The recording above also includes a flashlight effect. This too was based on <a href=\"https:\/\/learnopengl.com\/Lighting\/Light-casters\">the learnopengl tutorials<\/a>.<\/p>\n\n\n\n<p>Rendering with shaders adds a ton of additional options for effects, but also comes with a lot of decisions and tradeoffs around how data is stored. I would like to simplify my approach and end up with as few shaders as possible. 
For now it&#8217;s easiest for me to make models in Blender with basic materials, but maybe I can move the material definitions to a super-low-res texture (i.e., one pixel per material) and then only use textured models.<\/p>\n\n\n\n<h1 class=\"wp-block-heading\">Architecture<\/h1>\n\n\n\n<p>It is perhaps worth taking a step back to talk about how this stuff is architected. That is, how the data structures are arranged. As projects get bigger, it is easier to get lost in the confusion, and a little organization goes a long way.<\/p>\n\n\n\n<p>First off, the 3D stuff is mostly cosmetic. It exists to look good. It is thus largely separate from the underlying game logic:<\/p>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"397\" height=\"140\" src=\"https:\/\/timallanwheeler.com\/blog\/wp-content\/uploads\/2023\/12\/image-2.png\" alt=\"\" class=\"wp-image-1774\" srcset=\"https:\/\/timallanwheeler.com\/blog\/wp-content\/uploads\/2023\/12\/image-2.png 397w, https:\/\/timallanwheeler.com\/blog\/wp-content\/uploads\/2023\/12\/image-2-300x106.png 300w\" sizes=\"auto, (max-width: 397px) 100vw, 397px\" \/><\/figure>\n<\/div>\n\n\n<p><\/p>\n\n\n\n<p>I created a <code>MeshManager<\/code> class that is responsible for storing my mesh-related data. 
It currently keeps track of four things:<\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li>Materials &#8211; color properties for untextured meshes <\/li>\n\n\n\n<li>Textures &#8211; diffuse and specular maps for textured meshes <\/li>\n\n\n\n<li>Meshes &#8211; the 3D data associated with a single material or texture<\/li>\n\n\n\n<li>Models &#8211; a named 3D asset that consists of 1 or more meshes<\/li>\n<\/ol>\n\n\n\n<p>I think some structs help get the idea across:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code lang=\"cpp\" class=\"language-cpp\">struct Vertex {\n    glm::vec3 position;   \/\/ location\n    glm::vec3 normal;     \/\/ normal vector\n    glm::vec2 tex_coord;  \/\/ texture coordinate\n};<\/code><\/pre>\n\n\n\n<pre class=\"wp-block-code\"><code lang=\"cpp\" class=\"language-cpp\">struct Material {\n    std::string name;\n    glm::vec3 diffuse;   \/\/ diffuse color\n    glm::vec3 specular;  \/\/ specular color\n    f32 shininess;       \/\/ The exponent in the Phong specular equation\n};<\/code><\/pre>\n\n\n\n<pre class=\"wp-block-code\"><code lang=\"cpp\" class=\"language-cpp\">struct Texture {\n    GLuint id;             \/\/ The OpenGL texture id\n    std::string filepath;  \/\/ This is essentially its id\n};<\/code><\/pre>\n\n\n\n<pre class=\"wp-block-code\"><code lang=\"cpp\" class=\"language-cpp\">struct Mesh {\n    std::string name;  \/\/ The mesh name\n\n    std::vector&lt;Vertex&gt; vertices;\n\n    \/\/ Indices into `vertices` that form our mesh triangles\n    std::vector&lt;u32&gt; indices;\n\n    \/\/ An untextured mesh has a single material, typemax if none.\n    \/\/ If the original mesh has multiple materials, \n    \/\/ the assimp import process splits them.\n    u16 material_id;\n\n    \/\/ A textured mesh has both a diffuse and a specular texture\n    \/\/ (typemax if none)\n    u16 diffuse_texture_id;\n    u16 specular_texture_id;\n\n    GLuint vao;  \/\/ The OpenGL vertex array object id\n    GLuint vbo;  \/\/ The OpenGL vertex buffer 
id associated with `vertices`\n    GLuint ebo;  \/\/ The element array buffer associated with `indices`\n};<\/code><\/pre>\n\n\n\n<pre class=\"wp-block-code\"><code lang=\"cpp\" class=\"language-cpp\">struct Model {\n    std::string filepath;       \/\/ The filepath for the loaded model\n    std::string name;           \/\/ The name of the loaded model\n    std::vector&lt;u16&gt; mesh_ids;  \/\/ All mesh ids\n};<\/code><\/pre>\n\n\n\n<p>Visually, this amounts to:<\/p>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"618\" height=\"292\" src=\"https:\/\/timallanwheeler.com\/blog\/wp-content\/uploads\/2023\/12\/image-3.png\" alt=\"\" class=\"wp-image-1779\" srcset=\"https:\/\/timallanwheeler.com\/blog\/wp-content\/uploads\/2023\/12\/image-3.png 618w, https:\/\/timallanwheeler.com\/blog\/wp-content\/uploads\/2023\/12\/image-3-300x142.png 300w\" sizes=\"auto, (max-width: 618px) 100vw, 618px\" \/><\/figure>\n<\/div>\n\n\n<p><\/p>\n\n\n\n<p>All of this data can be loaded from the FBX files via assimp. In the future, I&#8217;ll also be writing this data out and back in via my own game data format.<\/p>\n\n\n\n<p>My game objects are currently somewhat of a mess. I have an EntityManager that maintains a set of entities:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code lang=\"cpp\" class=\"language-cpp\">struct Entity {\n    u32 uid;  \/\/ Unique entity id\n    u32 flags;\n\n    common::Vec2f pos;               \/\/ Position in gamespace\n\n    \/\/ A dual quarter edge for the face containing this position\n    core::QuarterEdgeIndex qe_dual;\n\n    u16 movable_id;       \/\/ The movable id, or 0xFFFF otherwise\n    u16 collider_id;      \/\/ The collider id, or 0xFFFF otherwise\n};<\/code><\/pre>\n\n\n\n<p>Entities have unique ids and contain ids for the various resources they might be associated with. For now this includes a movable id and a collider id. I keep those in their own managers as well. 
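<\/p>\n\n\n\n<p>The ids-into-managers pattern is simple enough to distill. This is a hypothetical sketch of the idea, not the project&#8217;s actual code:<\/p>\n\n\n\n

```cpp
#include <cstdint>
#include <vector>

using u16 = std::uint16_t;
constexpr u16 kInvalidId = 0xFFFF;  // sentinel for "entity has no such component"

// Hypothetical movable component: just the data described above.
struct Movable {
    float vel_x = 0.0f;
    float vel_y = 0.0f;
};

// A manager owns the components; entities hold only ids into it.
class MovableManager {
  public:
    u16 add(const Movable& m) {
        items_.push_back(m);
        return static_cast<u16>(items_.size() - 1);
    }
    Movable* get(u16 id) {
        return id == kInvalidId ? nullptr : &items_[id];
    }

  private:
    std::vector<Movable> items_;
};
```

\n\n\n\n<p>An entity whose <code>movable_id<\/code> is the sentinel simply has no movable component, and systems skip it.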
<\/p>\n\n\n\n<p>A movable provides information about how an entity is moving. Right now it is just a velocity vector and some information about what the entity is standing on.<\/p>\n\n\n\n<p>A collider is a convex polygon that represents the entity&#8217;s shape with respect to collision with the 2D game world:<\/p>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"607\" height=\"396\" src=\"https:\/\/timallanwheeler.com\/blog\/wp-content\/uploads\/2023\/12\/2023-12-27_21-19.png\" alt=\"\" class=\"wp-image-1807\" style=\"width:491px;height:auto\" srcset=\"https:\/\/timallanwheeler.com\/blog\/wp-content\/uploads\/2023\/12\/2023-12-27_21-19.png 607w, https:\/\/timallanwheeler.com\/blog\/wp-content\/uploads\/2023\/12\/2023-12-27_21-19-300x196.png 300w\" sizes=\"auto, (max-width: 607px) 100vw, 607px\" \/><\/figure>\n<\/div>\n\n\n<p><\/p>\n\n\n\n<p>The player is the only entity that currently does anything, and there is only one collider size from which the Delaunay mesh is built. I&#8217;ll be cleaning up the entity system in the future, but you can see where it&#8217;s headed &#8211; it is a basic entity component system.<\/p>\n\n\n\n<h1 class=\"wp-block-heading\">Editing Assets<\/h1>\n\n\n\n<p>I want to use these meshes to decorate my game levels. 
While I could theoretically create a level mesh in Blender, I would much prefer to be able to edit the levels in-game using smaller mesh components.<\/p>\n\n\n\n<p>I created a very basic version of this using ImGui wherein one can create and edit a list of <em>set pieces<\/em>:<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"491\" src=\"https:\/\/timallanwheeler.com\/blog\/wp-content\/uploads\/2023\/12\/2023-12-28_22-04-1024x491.png\" alt=\"\" class=\"wp-image-1812\" srcset=\"https:\/\/timallanwheeler.com\/blog\/wp-content\/uploads\/2023\/12\/2023-12-28_22-04-1024x491.png 1024w, https:\/\/timallanwheeler.com\/blog\/wp-content\/uploads\/2023\/12\/2023-12-28_22-04-300x144.png 300w, https:\/\/timallanwheeler.com\/blog\/wp-content\/uploads\/2023\/12\/2023-12-28_22-04-768x368.png 768w, https:\/\/timallanwheeler.com\/blog\/wp-content\/uploads\/2023\/12\/2023-12-28_22-04.png 1275w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<p><\/p>\n\n\n\n<p>This is leaps and bounds better than editing the level using a text file, or worse, hard-coding it. At the same time, it is pretty basic. I don&#8217;t highlight the selected set piece, I don&#8217;t have mouse events hooked up for click and drag, and my camera controls aren&#8217;t amazing. I do have some camera controls though! 
And I can set the camera to orthographic, which helps a lot:<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"486\" src=\"https:\/\/timallanwheeler.com\/blog\/wp-content\/uploads\/2023\/12\/2023-12-28_22-09-1024x486.png\" alt=\"\" class=\"wp-image-1813\" srcset=\"https:\/\/timallanwheeler.com\/blog\/wp-content\/uploads\/2023\/12\/2023-12-28_22-09-1024x486.png 1024w, https:\/\/timallanwheeler.com\/blog\/wp-content\/uploads\/2023\/12\/2023-12-28_22-09-300x142.png 300w, https:\/\/timallanwheeler.com\/blog\/wp-content\/uploads\/2023\/12\/2023-12-28_22-09-768x364.png 768w, https:\/\/timallanwheeler.com\/blog\/wp-content\/uploads\/2023\/12\/2023-12-28_22-09.png 1271w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<p><\/p>\n\n\n\n<p>The inclusion of this set piece editor meant that I needed a way to save and load game data. (You can see the import and export buttons in the top-left.) I went with the same WAD-file-like approach that I used for TOOM, which is a binary file consisting of a basic header, a list of binary blobs, and finally a table of contents:<\/p>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"422\" height=\"398\" src=\"https:\/\/timallanwheeler.com\/blog\/wp-content\/uploads\/2023\/12\/2023-12-29_07-48.png\" alt=\"\" class=\"wp-image-1814\" srcset=\"https:\/\/timallanwheeler.com\/blog\/wp-content\/uploads\/2023\/12\/2023-12-29_07-48.png 422w, https:\/\/timallanwheeler.com\/blog\/wp-content\/uploads\/2023\/12\/2023-12-29_07-48-300x283.png 300w\" sizes=\"auto, (max-width: 422px) 100vw, 422px\" \/><\/figure>\n<\/div>\n\n\n<p>Each blob consists of a name and a bunch of data. It is up to me to define how to serialize or deserialize the data associated with a particular blob type. 
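<\/p>\n\n\n\n<p>As a hypothetical example of one such blob (my own illustrative names, not the actual format), here is a count-prefixed list of structs round-tripped through raw bytes:<\/p>\n\n\n\n

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

using u32 = std::uint32_t;

struct Vec2f { float x, y; };

// Serialize: a u32 count followed by the raw structs.
std::vector<std::uint8_t> write_blob(const std::vector<Vec2f>& items) {
    std::vector<std::uint8_t> bytes(sizeof(u32) + items.size() * sizeof(Vec2f));
    const u32 count = static_cast<u32>(items.size());
    std::memcpy(bytes.data(), &count, sizeof(count));
    std::memcpy(bytes.data() + sizeof(count), items.data(),
                items.size() * sizeof(Vec2f));
    return bytes;
}

// Deserialize: read the count, then copy the structs back out.
std::vector<Vec2f> read_blob(const std::vector<std::uint8_t>& bytes) {
    u32 count = 0;
    std::memcpy(&count, bytes.data(), sizeof(count));
    std::vector<Vec2f> items(count);
    std::memcpy(items.data(), bytes.data() + sizeof(count),
                count * sizeof(Vec2f));
    return items;
}
```

\n\n\n\n<p>Using fixed-width types and <code>memcpy<\/code> keeps the on-disk layout explicit and avoids aliasing issues.<\/p>\n\n\n\n<p>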
Many of them just end up being a u32 count followed by a list of structs.<\/p>\n\n\n\n<p>One nice result of using a WAD-like binary is that I don&#8217;t always have to copy the data out to another data structure. With TOOM, I was copying things out in the C++ editor but was referencing the data as-is directly from the binary in the C code that executed the game. Later down the road I&#8217;ll probably do something similar with a baked version of the assets that the game can refer to.<\/p>\n\n\n\n<p>There are some tradeoffs. Binary files are basically impossible to edit manually. Adding new blob types doesn&#8217;t invalidate old save files, but changing how a blob type is serialized or deserialized does. I could add versions, but as a solo developer I haven&#8217;t found the need for it yet. I usually update the export method first, run my program and load using the old import method, export with the new method, and then update my import method. This is a little cumbersome but works. <\/p>\n\n\n\n<p>With TOOM I would keep loading and overwriting a single binary file. This was fine, but I was worried that if I ever accidentally corrupted it, I would have a hard time recreating all of my assets. With this project I decided to export new files every time. I have a little export directory and append the timestamp to the filename, ex: sidescroller_2023_12_22_21_28_55.bin. I&#8217;m currently loading the most recent one every time, but I still have older ones available if I ever bork things.<\/p>\n\n\n\n<h1 class=\"wp-block-heading\">Conclusion<\/h1>\n\n\n\n<p>This post was a little less targeted than normal. I usually talk about how a specific thing works. In this case, I didn&#8217;t want to re-hash how OpenGL works, and instead covered a somewhat broad range of things I learned and did while adding meshes to my game project. 
A lot of things in gamedev are connected, and by building things myself I&#8217;m learning the implications of various decisions and, oftentimes, why things are the way they are.<\/p>\n\n\n\n<p>There are a lot of concepts to learn and tools to use. The last month involved:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>OpenGL<\/li>\n\n\n\n<li>GLAD<\/li>\n\n\n\n<li>assimp<\/li>\n\n\n\n<li>stb_image<\/li>\n\n\n\n<li>Blender<\/li>\n\n\n\n<li>OBS Studio<\/li>\n\n\n\n<li>textures<\/li>\n\n\n\n<li>Phong lighting<\/li>\n\n\n\n<li>shaders<\/li>\n\n\n\n<li>&#8230;<\/li>\n<\/ul>\n\n\n\n<p>I am often struck by how something like using a GPU can seem arcane and out of reach from the outset, yet once you dive in you can pull back the curtains and figure it out. Sure, I still have a ton to learn, but the shape of the thing is there, and I can already do things with it.<\/p>\n\n\n\n<p>Happy coding, folks.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>In the previous post I introduced a new side scroller project, and laid out some of the things I wanted to accomplish with it. One of those points was to render with the graphics card. I had never worked directly with 3D assets and graphics cards before. 
Unless some high-level API was doing rendering for [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[10],"class_list":["post-1745","post","type-post","status-publish","format-standard","hentry","category-uncategorized","tag-sidescroller"],"_links":{"self":[{"href":"https:\/\/timallanwheeler.com\/blog\/wp-json\/wp\/v2\/posts\/1745","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/timallanwheeler.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/timallanwheeler.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/timallanwheeler.com\/blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/timallanwheeler.com\/blog\/wp-json\/wp\/v2\/comments?post=1745"}],"version-history":[{"count":79,"href":"https:\/\/timallanwheeler.com\/blog\/wp-json\/wp\/v2\/posts\/1745\/revisions"}],"predecessor-version":[{"id":2441,"href":"https:\/\/timallanwheeler.com\/blog\/wp-json\/wp\/v2\/posts\/1745\/revisions\/2441"}],"wp:attachment":[{"href":"https:\/\/timallanwheeler.com\/blog\/wp-json\/wp\/v2\/media?parent=1745"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/timallanwheeler.com\/blog\/wp-json\/wp\/v2\/categories?post=1745"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/timallanwheeler.com\/blog\/wp-json\/wp\/v2\/tags?post=1745"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}