As realtime raytracing slowly but steadily gains traction, a range of opportunities to mix rasterisation-based rendering systems with raytracing is becoming available: hybrid raytracing, where rasterisation provides the hit points for the primary rays; hybrid shadows, where shadowmaps are combined with raytracing to achieve smoother or higher-detail shadows; hybrid antialiasing, where raytracing is used to antialias only the edges; and hybrid reflections, where raytracing fills in the areas that screen-space reflections can’t resolve due to lack of information.
Of these, I found the last one particularly interesting: how well can a limited-information lighting technique like SSR be combined with a full-scene-aware one like raytracing? I set about exploring this further.
Continue reading “Hybrid screen-space reflections”
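The fallback logic the post explores can be sketched in a few lines. This is a minimal illustration in Python, not code from the post: the “screen” is a 1D list holding a colour where geometry is visible and None where screen-space information is missing, and the names (trace_ssr, hybrid_reflection) are hypothetical.

```python
def trace_ssr(screen, x, step):
    """March the reflection ray across the screen; return the first
    on-screen colour hit, or None once the ray leaves the screen."""
    while 0 <= x < len(screen):
        if screen[x] is not None:
            return screen[x]
        x += step
    return None

def hybrid_reflection(screen, x, step, raytrace):
    """Use SSR where it can resolve the ray; otherwise fall back to
    tracing the full scene (raytrace stands in for that path)."""
    hit = trace_ssr(screen, x, step)
    return hit if hit is not None else raytrace()
```

The key property is that the raytrace path is only taken for pixels where the screen-space march ran out of information, so the expensive rays are spent exactly where SSR fails.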
Last week at work a junior colleague asked me where I get the presentations I’ve been reading from. This made me realise that, understandably, this might not be common knowledge for people just starting out in graphics programming, so I compiled a list of online resources I frequently use to study the state of the art in rendering. Continue reading “Readings on the State of the Art in Rendering”
Unless you’ve been hidden in a cave the past few months, doing your rendering with finger painting, you might have noticed that raytracing is in fashion again with both Microsoft and Apple providing official DirectX (DXR) and Metal support for it.
Of course, I was curious to try it, but not having access to a DXR-capable machine, I decided to extend my toy engine to support it using plain compute shaders instead.
I opted for a hybrid approach that combines rasterisation, for first-hit determination, with raytracing for secondary rays (shadows, reflections, ambient occlusion etc). This approach is quite flexible as it allows us to mix and match techniques as needed; for example, we can perform classic deferred shading with raytraced ambient occlusion on top, or combine raytraced reflections with screen-space ambient occlusion, based on our rendering budget. Imagination has already done a lot of work on hybrid rendering, presenting a GPU that supports it back in 2014. Continue reading “Hybrid raytraced shadows and reflections”
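The division of labour described above can be illustrated with a toy example: the rasteriser’s g-buffer supplies the primary hit position, and the raytracer is only asked a yes/no shadow question along the secondary ray. This is a Python sketch with a stand-in sphere scene, not the engine’s actual API.

```python
def any_hit(origin, direction, spheres, max_t):
    """Return True if the shadow ray hits any sphere before max_t.
    direction is assumed to be normalised."""
    ox, oy, oz = origin
    dx, dy, dz = direction
    for (cx, cy, cz, r) in spheres:
        # standard ray/sphere quadratic with a = 1 (unit direction)
        fx, fy, fz = ox - cx, oy - cy, oz - cz
        b = fx * dx + fy * dy + fz * dz
        c = fx * fx + fy * fy + fz * fz - r * r
        disc = b * b - c
        if disc >= 0.0:
            t = -b - disc ** 0.5
            if 1e-4 < t < max_t:
                return True
    return False

def shadow_term(gbuffer_position, light_pos, spheres):
    """Primary visibility came from rasterisation; only the secondary
    (shadow) ray is traced against the scene."""
    to_light = [l - p for l, p in zip(light_pos, gbuffer_position)]
    dist = sum(v * v for v in to_light) ** 0.5
    d = [v / dist for v in to_light]
    return 0.0 if any_hit(gbuffer_position, d, spheres, dist) else 1.0
```

Any of the secondary-ray effects (reflections, ambient occlusion) follows the same pattern: start from the g-buffer hit point, trace into the scene, and feed the result back into shading.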
This week I had the pleasure of presenting the experiments I’ve been doing for the past six months on GPU driven rendering at the Digital Dragons conference in Poland. The event was well organised with lots of interesting talks, and I finally managed to meet many awesome graphics people that I previously knew only via Twitter.
I have uploaded the presentation slides in pdf and pptx formats, with speaker notes, in case anyone is interested, as well as the modified source code I used for the experiments (an executable is included; to compile the code you will need to download NvAPI). Continue reading “GPU Driven rendering experiments at the Digital Dragons conference”
A few weeks ago I was invited by @bkaradzic to port the GPU driven occlusion culling sample to bgfx. I had heard a lot of positive things about bgfx at that point but had never used it myself. This write-up describes the experience and the modifications I made to my original sample to make it work with the new framework. I suggest you read the original blog posts (part 1, part 2) first, since I won’t be delving much into the technique in this one.
Continue reading “Porting GPU driven occlusion culling to bgfx”
A few weeks ago I posted an article on how the GPU can be used to cull props, using a Hi-Z buffer of occluding geometry depths and a compute shader, and drive rendering without involving the CPU. This approach worked well, but there were two issues it did not address: the first was being forced to call DrawInstancedIndirect once per prop, due to the lack of support for MultiDrawInstancedIndirect in DX11, and the second was the lack of support for mesh level-of-detail (LOD) rendering. The second point is particularly important, as most games resort to this type of mesh optimisation to improve performance. So I revisited the GPU culling method to investigate how one could address both. As in the previous blog post, I tried to maintain the requirement for minimal art and content pipeline changes.
Continue reading “Experiments in GPU-based occlusion culling part 2: MultiDrawIndirect and mesh lodding”
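A toy CPU model of the per-instance LOD bucketing involved: each surviving instance picks a mesh LOD from its camera distance and is appended to that LOD’s instance list, mirroring what a compute shader would do by atomically appending into per-LOD indirect-argument buffers. The thresholds and names here are illustrative, not taken from the article.

```python
LOD_DISTANCES = [10.0, 30.0, 90.0]       # max distance for LODs 0, 1, 2

def select_lod(distance):
    """Pick the first LOD whose range covers this distance."""
    for lod, max_d in enumerate(LOD_DISTANCES):
        if distance <= max_d:
            return lod
    return None                           # beyond all LODs: cull entirely

def bucket_instances(instances, camera):
    """Group (id, position) instances into per-LOD lists; each list
    would back one (Multi)DrawInstancedIndirect argument entry."""
    buckets = {lod: [] for lod in range(len(LOD_DISTANCES))}
    for inst_id, pos in instances:
        d = sum((p - c) ** 2 for p, c in zip(pos, camera)) ** 0.5
        lod = select_lod(d)
        if lod is not None:
            buckets[lod].append(inst_id)
    return buckets
```

On the GPU the `buckets` dictionary becomes a set of per-LOD instance buffers plus an indirect-arguments buffer whose instance counts are bumped with atomics, so the CPU never needs to know which LOD each prop ended up in.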
Inspired by some awesome-looking games that have based their rendering pipeline on signed distance fields (SDFs), such as Claybook and Dreams, I decided to try some SDF rendering myself, for the first time.
Having seen some impressive shadertoy demos, I wanted to try SDFs in the context of an actual rendering engine, so I fired up Unity and modified the standard shader to render SDFs to the g-buffer. The SDF implementations came mainly from these two excellent posts.
Continue reading “Deferred Signed Distance Field rendering”
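At the core of any SDF renderer is sphere tracing: step along the ray by the distance the SDF guarantees to be empty until a surface is reached. Here is a minimal, generic sketch in Python (not Unity shader code) with an analytic sphere distance function.

```python
def sdf_sphere(p, centre, radius):
    """Signed distance from point p to a sphere surface."""
    return sum((a - b) ** 2 for a, b in zip(p, centre)) ** 0.5 - radius

def raymarch(origin, direction, sdf, max_steps=64, eps=1e-3, max_t=100.0):
    """Sphere tracing: advance by the SDF value, which is the largest
    step guaranteed not to skip past geometry."""
    t = 0.0
    for _ in range(max_steps):
        p = tuple(o + t * d for o, d in zip(origin, direction))
        d = sdf(p)
        if d < eps:
            return t          # hit: parametric distance along the ray
        t += d
        if t > max_t:
            break
    return None               # miss
```

In the deferred setup described above, the hit distance is converted to a world position and depth, and the SDF’s gradient supplies the normal, so the g-buffer can be filled exactly as it would be for rasterised geometry.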
Occlusion culling is a rendering optimisation technique in which triangles (meshes in general) that will not be visible on screen, because they are occluded by (i.e. behind) other solid geometry, are not drawn at all. Redundantly processing to-be-occluded triangles wastes work on the GPU (transforming vertices in the vertex shader, shading pixels in the pixel shader) as well as on the CPU (drawcall setup, animating skinned props etc), and should be avoided where possible.
Continue reading “Experiments in GPU-based occlusion culling”
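The Hi-Z flavour of the test can be illustrated with a toy 1D model, assuming the common convention that larger depth means farther away: build a max-depth mip chain over the occluder depth buffer, then keep a prop only if its nearest depth could be in front of the stored (farthest) occluder depth over its screen footprint. Names and the 1D layout are for illustration; the real test operates on a 2D screen-space rectangle.

```python
def build_hiz(depth):
    """Each mip stores the MAX (farthest) depth of the two texels below
    it, so one fetch gives a conservative occluder depth for a region."""
    mips = [list(depth)]
    while len(mips[-1]) > 1:
        prev = mips[-1]
        mips.append([max(prev[i], prev[min(i + 1, len(prev) - 1)])
                     for i in range(0, len(prev), 2)])
    return mips

def is_visible(mips, x0, x1, nearest_depth):
    """Pick the mip where the prop's footprint spans ~1 texel, then
    test the prop's nearest depth against the conservative max."""
    span = max(x1 - x0, 1)
    level = min(span.bit_length(), len(mips) - 1)
    occluder = max(mips[level][x0 >> level], mips[level][x1 >> level])
    return nearest_depth <= occluder      # could be in front: keep it
```

Because every mip stores a max, the test can only err on the side of keeping a prop, never of wrongly culling a visible one, which is what makes it safe to run fully on the GPU.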
This is part 3 of the “How Unreal Renders a Frame” series, you can access part 1 and part 2 as well.
In this blog post we are wrapping up the exploration of Unreal’s renderer with image space lighting, transparency rendering and post processing.
Continue reading “How Unreal Renders a Frame part 3”
This is part 2 of the “How Unreal Renders a Frame” series, you can access part 1 and part 3 as well.
We continue the exploration of how Unreal renders a frame by looking into light grid generation, g-prepass and lighting.
Continue reading “How Unreal Renders a Frame part 2”