World Engine/Game Devlog
This is the devlog for a renderer I was working on in October 2017.
On hiatus since I now work full-time at Giant Squid.
The Actual Devlog
Day 18 (10/31/17): debug draw; scene editing
Added some debug drawing functions today, and started on scene editing.
Other work is heavier this week (Sword Bros comes out in 10 days, and I have a few other deadlines), so I’ll probably be putting this project on hold for a week or two.
Day 17 (10/30/17): render target and 3d texture support; volumetric rendering research
I was back on framework features today. Specifically, I added support for creating/binding render targets (for shadow mapping, deferred effects, post-processing, etc), and started adding 3D textures.
I also read up on volumetric rendering, looking mainly at Bart Wronski’s 2014 Siggraph slides (Volumetric fog: Unified, compute shader based solution to atmospheric scattering) and Sebastien Hillaire’s 2016 Siggraph slides (Physically Based and Unified Volumetric Rendering in Frostbite). Both use 3D textures, as does voxel cone tracing, which is why I started adding those today.
10/26/17, 10/27/17: traveling
I’m away/busy today and tomorrow. More renderer stuff coming next week!
Day 16 (10/25/17): renderer now ported: lights, PBR shading, clustered forward, area lights
Now that the base Direct3D framework is working, I can start doing more interesting things again.
Today I ported over the clustered forward shading system (pbr, hdr, etc) and area lights. So now the Direct3D version has feature parity with the old bgfx version.
I also did some code cleanup, since the program was a single 3000+ line .cpp file. It wasn’t so long as to be unworkable, but there was no real reason for it to be a single file, especially since I’m adding more project-specific code now.
10/24/17: building Sword Bros sites
The pages aren’t too complex, but both required some new art assets, and the game site has some CSS-skinned geometry that was pretty annoying to implement.
Day 15 (10/23/17): basic porting finished; the story of a bug
Short day due to other obligations/work. Mostly finished porting/rewriting a basic graphics framework, including adding depth buffer support (which I’d somehow forgotten).
Ran into an annoying bug where the depth buffer only worked in release mode. After doing all the usual memory checks, I threw together a quick test scene with one triangle behind another and jumped into RenderDoc. Turned out the buffer was working, but it was writing depths of 0 for all vertices, despite the actual vertex data being correct. Additionally, the depths written were correct when the view had zero rotation. All this suggested I was missing some info on how Direct3D handles depth. So I did some looking and found the SlopeScaledDepthBias setting. Sure enough: SlopeScaledDepthBias is a member of D3D11_RASTERIZER_DESC (used in ID3D11Device::CreateRasterizerState), I’d overlooked that code when checking my initializations, and the field was being used with an uninitialized value (which the debug build was filling with a dummy value, breaking the depth output).
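The general fix for this class of bug is to value-initialize descriptor structs so no member is left holding garbage. A portable sketch of the pattern (MockDesc stands in for D3D11_RASTERIZER_DESC here; its fields are illustrative, not the real layout):

```cpp
// A stand-in for a D3D-style descriptor struct with several members,
// most of which you usually leave at their defaults.
struct MockDesc {
    int fillMode;
    float depthBias;
    float slopeScaledDepthBias;
};

MockDesc makeDesc() {
    MockDesc desc = {};  // value-initialization zeroes every member,
                         // debug or release, no fill patterns involved
    desc.fillMode = 3;   // then set only the fields you actually care about
    return desc;
}
```

With `= {}` the bias fields are guaranteed to be 0 in every build configuration, instead of depending on whatever the debug runtime or the stack happened to leave there.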
Bonus: a debug shot where various things are broken.
10/20/17: day off!
It was 70 degrees and sunny in Philadelphia today, and also a Friday, so I decided to take the day off. More porting coming next week!
Day 14 (10/19/17): Direct3D11 framework, starting porting
Started porting/rewriting the basic features today: model loading, camera controls, basic shaders. Also continued on features of the D3D11 framework that I’d overlooked, namely shader constant buffers (which require a bit more framework than OpenGL uniforms).
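Part of what that extra framework has to respect: HLSL packs constant buffers into 16-byte registers, and D3D11 requires buffer sizes to be multiples of 16 bytes, so the CPU-side struct needs matching alignment and explicit padding. A minimal sketch (the field names are illustrative, not the devlog's actual layout):

```cpp
// CPU-side mirror of a hypothetical HLSL cbuffer. HLSL packs fields
// into 16-byte registers, so a float3 followed by anything else needs
// explicit padding to keep the two sides in sync.
struct alignas(16) CameraConstants {
    float viewProj[16];  // 4x4 matrix: 64 bytes (four registers)
    float cameraPos[3];  // 12 bytes...
    float padding;       // ...padded out to a full 16-byte register
};

// D3D11 rejects constant buffers whose size isn't a multiple of 16.
static_assert(sizeof(CameraConstants) % 16 == 0,
              "constant buffer size must be a multiple of 16 bytes");
```

In OpenGL you'd just call glUniform* per value; here the whole struct is uploaded as one buffer, which is why the layout rules leak into the framework code.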
Hoping to get back to more exciting stuff soon. For now, here’s a shot from right after I got the new geometry import working.
Day 13 (10/18/17): Direct3D 11 (3/3)
Short day today, since I was doing a lot on other projects. On the bright side, I got the basic Direct3D framework finished, so I can start porting the renderer over tomorrow.
Day 12 (10/17/17): Direct3D 11 (2/?)
Got most of a lightweight D3D11 framework working today. The main things remaining are texture, framebuffer, and index buffer wrappers, and finishing up mesh import. Nothing too exciting, other than D3D continuing to be way nicer to learn/use than OpenGL.
Line count is ~1500, including window/input management stuff.
Graphics api so far looks like this:
ShaderId makeShader(const char *vsText, const char *psText, const char *fileName);
void setShader(const ShaderId shaderId);
VertexFormatId makeVertexFormat(const VertexAttributeDesc *attributeDescs, const uint nAttribs, const ShaderId shader);
void setVertexFormat(const VertexFormatId id);
VertexBufferId makeVertexBuffer(const void *data, const uint32 size, const BufferAccess bufferAccess);
void setVertexBuffer(const VertexBufferId vbId);
void draw(const D3D_PRIMITIVE_TOPOLOGY topology, const int nVertices, const int firstVertex = 0);
void drawIndexed(const D3D_PRIMITIVE_TOPOLOGY topology, const int nIndices, const int firstIndex = 0);
MeshId loadMesh(const std::string path);
Day 11 (10/16/17): Direct3D 11 (goodbye bgfx); premake
I’d been having some issues with bgfx, so I decided to switch to Direct3D 11. Some reasoning:
- bgfx supports lots of different backends, so lots of fancier features are either unsupported or undocumented. For instance, I decided to do voxel cone tracing, but I’d like to use arbitrary shader texture writes for scene voxelization, and it’s unclear if bgfx supports that.
- I don’t have the bandwidth to support a multiplatform release anyway, so much of the benefit of having a cross-platform library is lost there.
- I was getting stack corruption from certain uniform setting calls, which makes the library instantly not worth it. Even if issues like this are due to something I’m doing wrong, they’re a time sink.
- bgfx is only lightly documented. Direct3D seems very well documented.
I also decided to start using a project generator, since I’m really tired of the Visual Studio property pages. CMake is the one everyone uses, but I’m using Premake for now. The CMake language is pretty unpleasant, but I’m very comfortable with Lua. Premake isn’t as mature as CMake, but it seems decently well-tested.
So today started with reading over the Premake docs, and then using it to build a project with a few external libs:
- SDL2 (platform code)
- Assimp (model loading)
- stb_image (image loading/decoding)
- glm (vectors, matrices, quaternions)
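For a sense of what that setup looks like, here's a minimal premake5.lua along these lines (the project name, paths, and exact library list here are illustrative, not the actual build files):

```lua
-- Hypothetical minimal premake5 script for a project like this one.
workspace "WorldEngine"
   configurations { "Debug", "Release" }
   platforms { "x64" }

project "WorldEngine"
   kind "WindowedApp"
   language "C++"
   files { "src/**.cpp", "src/**.h" }
   includedirs { "external/SDL2/include", "external/assimp/include" }
   links { "SDL2", "SDL2main", "assimp" }

   filter "configurations:Debug"
      defines { "DEBUG" }
      symbols "On"

   filter "configurations:Release"
      optimize "On"
```

Running `premake5 vs2017` against a script like this regenerates the Visual Studio project, which is the whole point: the property pages stop being hand-maintained state.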
I also got started on the renderer framework, but this is my first time using Direct3D rather than OpenGL/WebGL, so that’ll probably take a bit longer. On the bright side, D3D seems way nicer to use so far than OpenGL, and decently well-documented.
Day 10 (10/13/17): global illumination research
I wrote a fair amount of code yesterday, so I spent today researching global illumination/occlusion/shadowing techniques.
I’d like to have really good (raytraced quality, ideally) shadows/occlusion around dynamic entities, since that does so much to ground objects in the world. Global illumination also seems like a good investment, and is of course related.
For starting points, I found SIGGRAPH Advances in Real-Time Rendering courses particularly useful, along with Colin Barré-Brisebois’s 2017 SIGGRAPH Open Problems presentation on the state of GI in games.
Voxel cone tracing seems like a likely candidate that would handle both (or light propagation volumes, but I’m leaning towards VCT). Or alternatively I could do SDF tracing for the shadowing/occlusion and light maps / probes (probably spherical gaussians) for GI.
I still haven’t figured out a great technique for shadowing the area lights (seems like it’s an open problem?), but I guess I can fake it for now.
On a side note, I was also reminded of Non-Linearly Quantized Moment Shadow Maps. They’re not exactly what I’m looking for, but they seem like they’ll be good if/when I end up doing shadow mapping. (I usually put off or avoid shadow mapping since it can be so fiddly/mushy).
Volumetrics and particles will also be important eventually, but those’ll depend on other lighting stuff so I’m doing them later.
Day 9 (10/12/17): clustered forward (2/2)!
Got clustered forward working today!
Things it does:
- Partition view frustum
- Cull lights per-cluster
- Retrieve relevant cluster light data in shader, and perform shading
Things it doesn’t do yet:
- Correctly cull non-point lights.
It’s also completely unoptimized and only handles lights, but there’ll be time for improvement next week. For now though, I’d like to take a break from working on clustered forward.
It still looks the same (that’s the idea), so here’s a debug shot where lights are culled as if they had zero area of influence.
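The frustum-partitioning step above can be sketched as a screen-tile grid plus exponential depth slices; here's a minimal illustrative version (the grid dimensions, near/far planes, and flattening order are assumptions for the example, not the renderer's actual values):

```cpp
#include <cmath>

// Hypothetical cluster grid: 16x9 screen tiles, 24 exponential depth slices.
const int kClustersX = 16, kClustersY = 9, kClustersZ = 24;
const float kNear = 0.1f, kFar = 100.0f;

// Map view-space depth to an exponential slice index, the slicing scheme
// commonly used in clustered shading so near clusters stay small.
int depthSlice(float viewZ) {
    float slice = std::log(viewZ / kNear) / std::log(kFar / kNear) * kClustersZ;
    int s = (int)slice;
    if (s < 0) s = 0;
    if (s >= kClustersZ) s = kClustersZ - 1;
    return s;
}

// Flatten (tileX, tileY, slice) into a single cluster index.
// u, v are normalized screen coordinates in [0, 1).
int clusterIndex(float u, float v, float viewZ) {
    int x = (int)(u * kClustersX);
    int y = (int)(v * kClustersY);
    return (depthSlice(viewZ) * kClustersY + y) * kClustersX + x;
}
```

The culling pass then tests each light against each cluster's mini-frustum and records the survivors in that cluster's list; the shader inverts this mapping to find its cluster from the fragment's position and depth.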
Day 8 (10/11/17): clustered forward (1/?): per-cluster light lists
Got the first half of clustered forward working today. The renderer now builds per-cluster light lookup tables, then processes and uses these in the lighting shader. bgfx makes you do uniform packing manually, so most of this work was doing that packing. I also spent a while trying to use int array uniforms for the packed item indices and counts, but I was only ever getting the first value through, and none were used in any of the example projects, so I’m guessing they aren’t actually supported.
I’m not actually culling lights yet (so each cluster currently contains every light), but that should be quick to finish up tomorrow.
The cluster light list data format is the one from DOOM 2016, as described in their 2016 SIGGRAPH Advances slides.
Here’re some point lights rendering through the new system. The white quads are just debug markers.
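As a rough sketch of that layout (the names and types here are illustrative, not DOOM's actual code): each cluster stores an (offset, count) pair into a shared item-index list, which in turn indexes the global light array.

```cpp
#include <vector>
#include <cstdint>

// Per-cluster record: where this cluster's items start in the shared
// item list, and how many there are.
struct ClusterRecord { uint32_t offset; uint32_t count; };

struct ClusterLightLists {
    std::vector<ClusterRecord> clusters;  // one record per cluster
    std::vector<uint32_t> itemIndices;    // indices into the global light array
};

// Flatten per-cluster vectors of surviving light indices (culling is done
// elsewhere) into the two-level lookup structure.
ClusterLightLists buildLists(const std::vector<std::vector<uint32_t>>& perCluster) {
    ClusterLightLists out;
    for (const auto& lights : perCluster) {
        out.clusters.push_back({ (uint32_t)out.itemIndices.size(),
                                 (uint32_t)lights.size() });
        for (uint32_t li : lights) out.itemIndices.push_back(li);
    }
    return out;
}
```

The shader side does the same two hops: cluster index → (offset, count), then loop over itemIndices[offset .. offset+count) to fetch lights.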
Day 7 (10/10/17): clustered forward prep: point lights and light type support; proper tonemapping
I realized that what I’ve been calling “forward+” should actually be called “clustered forward.” Lots of people on the internet call it “forward+,” but the actual papers and presentations call it “clustered forward.” I’ve gone back and made the change to all past entries.
Now the actual log:
Started actually working on clustered forward today, kinda, by adding point light support. These should be more straightforward for the initial clustered light culling. Plus they’ll be important for actually making things in this engine.
I also realized that I’d forgotten the tonemapping and gamma correction when implementing the area lights. Now it uses ACES tonemapping and does proper gamma correction. It looks a lot less dark now.
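For reference, here's a single-channel sketch of that step using the common Narkowicz ACES fit (the devlog doesn't say which ACES fit it uses, so that's an assumption) followed by a simple 2.2 gamma curve:

```cpp
#include <cmath>

// Krzysztof Narkowicz's filmic ACES approximation, applied per channel.
float acesTonemap(float x) {
    const float a = 2.51f, b = 0.03f, c = 2.43f, d = 0.59f, e = 0.14f;
    float y = (x * (a * x + b)) / (x * (c * x + d) + e);
    // Clamp to [0, 1]; the curve slightly overshoots 1 for large inputs.
    return y < 0.0f ? 0.0f : (y > 1.0f ? 1.0f : y);
}

// Gamma correction: linear -> display with a 2.2 power curve
// (an approximation of the sRGB transfer function).
float linearToGamma(float x) {
    return std::pow(x, 1.0f / 2.2f);
}
```

Skipping the gamma step is exactly what makes a linear-space HDR renderer look too dark: mid-gray linear values map to much lower display values than intended until the 1/2.2 curve is applied.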
Day 6 (10/9/17): clustered forward background reading
I was doing revisions on a paper today, and so didn’t have as much time to work. Spent some time reading up on clustered forward, since it’ll be my first time implementing it. Seems pretty straightforward, so I’m planning on starting tomorrow, and hopefully will have something running by later in the week.
Day 5 (10/6/17): cleanup
Did a shorter day today, cleaning up the code (was 2000+ lines pre-cleanup, mostly unnecessary or commented) and fixing various lingering bugs in the area lights implementation.
Looks the same, but now it’s much nicer behind the scenes.
Day 4 (10/5/17): area lights, day 2/2.
It took a while, but got the area lights working in the end. Turns out I’d forgotten to actually set the projection matrix and to load some of the textures, and that I was reading transforms from an uninitialized duplicate struct rather than the actual scene objects.
I should’ve built the base renderer incrementally, but I’d just done the opengl deferred one a few days ago. So I basically just wrote the code and then hit compile, but I guess figuring out bgfx took more mental energy than I accounted for. I also haven’t written the texture pre-processing yet, so textured lights look wrong. It’s basically just a gaussian clamped and renormalized inside the texture bounds though, so that shouldn’t be too much work once I get an asset pipeline going.
Day 3 (10/4/17): area lights
Today I decided to actually start working on the renderer. Eventually I’d like to try clustered forward, but today I took a detour into area lights. I remembered seeing a cool-looking paper at SIGGRAPH, Real-Time Polygonal-Light Shading with Linearly Transformed Cosines by Heitz et al., so I spent today working on that. They provide a reference implementation, which was extremely useful. I’m using their LUT textures directly, which saves a lot of time on generating that data.
I ended up hitting some road bumps getting up to speed with bgfx (e.g. it uses its own shader format, which is then compiled with an included tool), so I wasn’t able to get the lights working 100%. I think I’m close though!
Day 2 (10/3/17): bgfx integration
I’m already getting tired of writing opengl wrapper code, so I decided to integrate bgfx instead, gaining a nicer API, support for other backends, and draw call bucketing in the process. It took a bit of fiddling with build settings, but overall it was pretty quick to add and integrate with the rest of the project (just had to pass it the native window handle from SDL2).
While working on the integration, I realized that the bgfx example projects already handled all of this. So I decided to just build on top of those.
I spent the rest of the day reading through the bgfx example code and docs.
Day 1 (10/2/17): project setup; base renderer
My goal today was to get a Visual Studio project set up and to get a triangle on screen through an opengl deferred pipeline.
A deferred pipeline requires some actual infrastructure: mesh structures with texture support, render textures, shader handling, etc. I’m not planning for or against deferred for the actual renderer; it just seemed like a good way to make sure things were working.
I also wanted to get most external code built and integrated, so I could be done with Visual Studio project stuff after today. In particular, I chose to use:
- Assimp (model loading)
- SDL2 (platform code)
- stb_image (image loading/decoding)
- FreeType (font rendering)
- GLEW (gl extensions management)
- GLM (vectors, matrices, quaternions, etc)
- LuaJIT (fast scripting)
I’d used all of these on past projects (with the exception of FreeType), and found them generally nice to work with.
This wasn’t the most enjoyable day of work, but I got everything running together in the end, and got the deferred triangle test.