Experiments in GPU-based occlusion culling part 2: MultiDrawIndirect and mesh lodding

A few weeks ago I posted an article on how the GPU can be used to cull props, using a Hi-Z buffer of occluder depths and a compute shader, and drive rendering without involving the CPU. This approach worked well, but it left two issues unaddressed: first, being forced to call DrawInstancedIndirect once per prop, due to the lack of MultiDrawInstancedIndirect support in DX11, and second, the lack of support for mesh level-of-detail (LOD) rendering. The second point is particularly important, as most games rely on this type of mesh optimisation to improve performance. So I revisited the GPU culling method described there to investigate how both could be addressed. As in the previous blog post, I tried to keep the requirement for art modification and content pipeline changes to a minimum.

Continue reading “Experiments in GPU-based occlusion culling part 2: MultiDrawIndirect and mesh lodding”


Deferred Signed Distance Field rendering

Inspired by some awesome-looking games that have based their rendering pipeline on signed distance fields (SDFs), such as Claybook and Dreams, I decided to try some SDF rendering myself, for the first time.

Having seen some impressive shadertoy demos, I wanted to try SDFs in the context of an actual rendering engine, so I fired Unity up and modified the standard shader so that it renders SDFs to the g-buffer. The SDF implementations came mainly from these two excellent posts.
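The basic ingredient those posts describe is a function that returns the signed distance from a point to a surface, ray-marched via sphere tracing. As a minimal self-contained sketch in C++ (the actual shader work happens in HLSL inside Unity's standard shader; the vector type and function names here are my own):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3  add(Vec3 a, Vec3 b)    { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3  scale(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
static float length(Vec3 a)         { return std::sqrt(a.x*a.x + a.y*a.y + a.z*a.z); }

// Signed distance to a sphere of radius r centred at the origin:
// negative inside, zero on the surface, positive outside.
float sdSphere(Vec3 p, float r) { return length(p) - r; }

// Sphere tracing: step along the ray by the distance to the nearest
// surface until we are close enough (hit) or march too far (miss).
// Returns the distance along the ray, or -1.0f on a miss.
float raymarch(Vec3 origin, Vec3 dir, float radius)
{
    float t = 0.0f;
    for (int i = 0; i < 128; ++i) {
        float d = sdSphere(add(origin, scale(dir, t)), radius);
        if (d < 1e-4f) return t;
        t += d;
        if (t > 100.0f) break;
    }
    return -1.0f;
}
```

In a deferred setup, the hit position and an SDF-gradient normal would then be written to the g-buffer instead of being shaded directly.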

Continue reading “Deferred Signed Distance Field rendering”


Experiments in GPU-based occlusion culling

Occlusion culling is a rendering optimisation technique in which triangles (meshes in general) that will not be visible on screen, because they are occluded by (i.e. behind) other solid geometry, are not drawn at all. Redundantly shading to-be-occluded triangles wastes work on the GPU, such as transforming vertices in the vertex shader and shading pixels in the pixel shader, as well as on the CPU (drawcall setup, animating skinned props, etc.), and should be avoided where possible.
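As a toy illustration of the idea (not the Hi-Z GPU method the post goes on to develop), a prop whose screen-space bounds lie entirely within an occluder's bounds, and entirely behind it in depth, can be skipped; the types and names below are hypothetical.

```cpp
#include <cassert>

// Axis-aligned screen-space bounding rectangle.
struct Rect { float minX, minY, maxX, maxY; };

// Conservative test: the prop is certainly hidden only if its rectangle
// is fully covered by the occluder's, and its nearest point is behind
// the occluder's farthest point (larger depth = further from camera).
bool isOccluded(const Rect& prop, float propMinDepth,
                const Rect& occluder, float occluderMaxDepth)
{
    bool contained = prop.minX >= occluder.minX && prop.maxX <= occluder.maxX &&
                     prop.minY >= occluder.minY && prop.maxY <= occluder.maxY;
    return contained && propMinDepth > occluderMaxDepth;
}
```

The test must err on the side of drawing: a prop that is only partially covered, or not clearly behind the occluder, is kept, since culling a visible mesh is far worse than shading a hidden one.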

Continue reading “Experiments in GPU-based occlusion culling”


How Unreal Renders a Frame

This is part 1 of the “How Unreal Renders a Frame” series; you can access part 2 and part 3 as well.

I was looking around the Unreal source the other day and, inspired by some excellent breakdowns of how popular games render a frame, I thought to try something similar with it, to study how Unreal renders a frame (with the default settings/scene setup).

Continue reading “How Unreal Renders a Frame”


Adventures in postprocessing with Unity

Some time ago I did an investigation into whether, and how, Unity could be used as an FX Composer replacement, using the free version as a test. I concluded then that to a large degree Unity could be used for shader prototyping. It was missing the low-level access, though, that would allow me to implement more complicated graphics techniques, so I jumped onto SharpDX for a couple of years.

Developing code is good, but sometimes you just need to drag and drop a few assets, attach a shader and get going. Now that Unity is available fully featured for free, it was time to give it another go. In this post I document my findings.

Continue reading “Adventures in postprocessing with Unity”
