I’m making more progress on the Vulkan prototype. The SceneEngine compiles, BufferUploads works, and it’s now possible to render basic geometry. Many features aren’t supported yet, but the core rendering features (such as binding textures, compiling shaders, and pushing geometry through the pipeline) have an early implementation.
So, while a working Vulkan version is now close, I also have a much better idea of the major hurdles involved in getting a really streamlined and efficient Vulkan implementation!
After a week of playing around with Vulkan, here are four tips I’ve picked up. They’re maybe not very obvious (some aren’t clearly documented), but I think they’re important to know.
1. Use buffers as staging areas to initialize images

The crux of this is that we need a particular function, vkCmdCopyBufferToImage, to initialize images from staging data in Vulkan. It’s almost impossible to do any graphics work with Vulkan without doing this – but at first glance it might seem a bit counter-intuitive.
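For reference, here’s a rough sketch of that flow in C++ – create a host-visible staging buffer, memcpy the pixel data into it, and record the copy into a command buffer. This is a minimal sketch, not XLE’s actual code; the helper name, the pre-selected memory type index, and the assumption that the layout transition to TRANSFER_DST_OPTIMAL happens elsewhere are all mine.

```cpp
#include <vulkan/vulkan.h>
#include <cstring>

// Copies 'pixels' into 'image' via a host-visible staging buffer.
// Assumes: 'device' is a valid VkDevice, 'cmd' is a command buffer in the
// recording state, 'image' was created with VK_IMAGE_USAGE_TRANSFER_DST_BIT,
// and 'stagingMemoryTypeIndex' selects a HOST_VISIBLE | HOST_COHERENT heap.
void UploadImageViaStagingBuffer(
    VkDevice device, VkCommandBuffer cmd, VkImage image,
    uint32_t width, uint32_t height,
    const void* pixels, VkDeviceSize byteCount,
    uint32_t stagingMemoryTypeIndex)
{
    // 1. Create the staging buffer, with TRANSFER_SRC usage
    VkBufferCreateInfo bufferInfo = {};
    bufferInfo.sType = VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO;
    bufferInfo.size = byteCount;
    bufferInfo.usage = VK_BUFFER_USAGE_TRANSFER_SRC_BIT;
    bufferInfo.sharingMode = VK_SHARING_MODE_EXCLUSIVE;
    VkBuffer staging;
    vkCreateBuffer(device, &bufferInfo, nullptr, &staging);

    // 2. Allocate & bind host-visible memory, then copy the pixel data in
    VkMemoryRequirements memReq;
    vkGetBufferMemoryRequirements(device, staging, &memReq);
    VkMemoryAllocateInfo allocInfo = {};
    allocInfo.sType = VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO;
    allocInfo.allocationSize = memReq.size;
    allocInfo.memoryTypeIndex = stagingMemoryTypeIndex;
    VkDeviceMemory stagingMem;
    vkAllocateMemory(device, &allocInfo, nullptr, &stagingMem);
    vkBindBufferMemory(device, staging, stagingMem, 0);

    void* mapped = nullptr;
    vkMapMemory(device, stagingMem, 0, byteCount, 0, &mapped);
    std::memcpy(mapped, pixels, (size_t)byteCount);
    vkUnmapMemory(device, stagingMem);

    // 3. Record the buffer-to-image copy. The image must already be in
    //    TRANSFER_DST_OPTIMAL layout here (barrier omitted for brevity)
    VkBufferImageCopy region = {};       // bufferRowLength 0 = tightly packed
    region.imageSubresource.aspectMask = VK_IMAGE_ASPECT_COLOR_BIT;
    region.imageSubresource.layerCount = 1;
    region.imageExtent = { width, height, 1 };
    vkCmdCopyBufferToImage(
        cmd, staging, image,
        VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL, 1, &region);

    // Note: the staging buffer & memory must stay alive until the command
    // buffer finishes executing; destroy them after a fence/queue wait.
}
```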
So, the Vulkan prototype is progressing… But I’m running into many problems working with the drivers and the associated tools. Here are some examples of the problems I’m finding.
RenderDoc crashing

RenderDoc is the best tool for debugging Vulkan at the moment… But every time I tried to capture a log, it just crashed! The crash report didn’t contain any useful information. All I could do was guess at the problem.
As part of the Vulkan prototype, I’m experimenting with compiling HLSL shaders to SPIR-V.
The Vulkan API doesn’t have a high level shader language attached. Instead, it works with a new intermediate bytecode format called SPIR-V. There’s tooling to translate between SPIR-V and the LLVM compiler system, so in theory we can plug in different front ends to allow various high level languages to compile down to SPIR-V.
That sounds great for the future… But right now, there doesn’t seem to be a great path for generating the SPIR-V bytecode.
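That said, once you do have SPIR-V bytecode from somewhere, feeding it to Vulkan itself is the easy part. Here’s a minimal sketch – the helper is my own, not XLE’s API – that wraps raw bytecode in a VkShaderModule:

```cpp
#include <vulkan/vulkan.h>
#include <vector>
#include <cstdint>

// Wraps raw SPIR-V bytecode in a VkShaderModule. Assumes 'device' is a
// valid VkDevice and 'spirv' holds the full bytecode. Note that SPIR-V is
// a stream of 32-bit words, but codeSize is specified in bytes.
VkShaderModule CreateShaderModule(
    VkDevice device, const std::vector<uint32_t>& spirv)
{
    VkShaderModuleCreateInfo info = {};
    info.sType = VK_STRUCTURE_TYPE_SHADER_MODULE_CREATE_INFO;
    info.codeSize = spirv.size() * sizeof(uint32_t);  // size in bytes
    info.pCode = spirv.data();

    VkShaderModule module = VK_NULL_HANDLE;
    if (vkCreateShaderModule(device, &info, nullptr, &module) != VK_SUCCESS)
        return VK_NULL_HANDLE;
    return module;
}
```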
So, it’s a simple story – boy meets API, yada, yada, yada…
I’ve started to build some initial experiments with the Vulkan API. Version 1.0 of the API was just released – and there’s an SDK available from LunarG (backed by Valve).
Initial impressions

My first impressions are very positive! Many of the design ideals and structures of the API are familiar from my days working with consoles (particularly the Sony consoles).
There has been a lot of research on order independent transparency recently. Here are a few screenshots comparing the following methods:
Sorted – this mode sorts back-to-front per fragment. It’s very expensive, but serves as a reference.
Stochastic – as per nVidia’s Stochastic Transparency research. This uses MSAA hardware to estimate the optical depth covering a given sample.
Depth weighted – as per nVidia’s other white paper, “A Phenomenological Scattering Model for Order-Independent Transparency” (see the sketch below).
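For a feel of how the depth weighted mode works, here’s a rough per-pixel sketch of the weighted-blend accumulate/resolve idea from the nVidia papers this approach builds on. This is illustrative only – a CPU-side restatement of the math, not XLE’s shader code – and the weight function and helper names are my own.

```cpp
#include <algorithm>

struct RGBA { float r, g, b, a; };

// Each transparent fragment is accumulated with a weight that falls off
// with depth, so nearer fragments dominate without any sorting.
// 'accum' collects weighted premultiplied colour; 'revealage' tracks how
// much of the background remains visible.
void AccumulateFragment(
    RGBA colour, float alpha, float weight,   // weight = f(depth, alpha)
    RGBA& accum, float& revealage)
{
    accum.r += colour.r * alpha * weight;
    accum.g += colour.g * alpha * weight;
    accum.b += colour.b * alpha * weight;
    accum.a += alpha * weight;
    revealage *= (1.0f - alpha);
}

// Final composite of the accumulated transparency over the opaque background.
RGBA Resolve(RGBA accum, float revealage, RGBA background)
{
    float denom = std::max(accum.a, 1e-5f);  // avoid divide-by-zero
    RGBA out;
    out.r = accum.r / denom * (1.0f - revealage) + background.r * revealage;
    out.g = accum.g / denom * (1.0f - revealage) + background.g * revealage;
    out.b = accum.b / denom * (1.0f - revealage) + background.b * revealage;
    out.a = 1.0f;
    return out;
}
```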
Here are a few screenshots of environment rendering in XLE. I don’t have a lot of art I can take screenshots of, so I can’t show a very polished scene… But you can see some of the rendering features.
Look for:
shallow water surface animation
order independent blending for foliage
volumetric fog (& related effects)
dynamic imposters for distant trees
high res terrain geometry
terrain decoration spawn
“contact hardening” shadows
infinite terrain shadows (see older screenshots here)