It's time to start improving the code.
Let's start with that boring grey background.
First we'll change its color (and learn a bit about command buffers and render passes in the process),
then we'll get full programmatic control (and learn a bit about pipelines and shaders).
Changing the clear color: Render Passes and Command Buffers
Our first goal is to make the clear color fancier (i.e., not grey).
To do this, we'll need to modify the code in Tutorial::render.
Here's where we're starting: Tutorial::render currently hands all the work to a single refsol:: call that records a blank frame.
Command Buffers
In Vulkan, command buffers (handle type: VkCommandBuffer) are lists of commands that the GPU can run.
To get the GPU to do anything using Vulkan, you first create a command buffer with the commands you want the GPU to run, then you submit that command buffer to one of the GPU's command queues.
The Tutorial class allocates command buffers -- one per Workspace -- and puts commands in them in the render function.
Right now it does that with a refsol:: function, but let's change that.
Remove the call to refsol::Tutorial_render_record_blank_frame, and add a call to vkResetCommandBuffer:
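A sketch (here and below, workspace.command_buffer is assumed to name the current Workspace's command buffer -- match whatever your code calls it):

```cpp
vkResetCommandBuffer(workspace.command_buffer, 0);
```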
Resetting the command buffer clears any previously recorded commands.
However, if you were to compile and run the code now (potentially tagging the framebuffer as [[maybe_unused]] to avoid a compile warning), you'd get a bunch of complaints about the command buffer being "unrecorded."
So let's fix that by actually recording the command buffer:
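Something like this (a sketch; checks on the returned VkResults are omitted):

```cpp
VkCommandBufferBeginInfo begin_info{
	.sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO,
	.flags = VK_COMMAND_BUFFER_USAGE_ONE_TIME_SUBMIT_BIT, //we re-record the buffer every frame
};
vkBeginCommandBuffer(workspace.command_buffer, &begin_info);

//TODO: put GPU commands here!

vkEndCommandBuffer(workspace.command_buffer);
```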
Now if you compile and run you'll get a different set of errors about the image passed to the present function not being in the proper layout.
We'll take care of this in a moment by running a render pass.
Render Passes
Let's get our image into the correct layout so we can actually see something.
To do this, we'll add code to record commands to begin (and then immediately end) a render pass.
Add this code between your command buffer begin and end functions (replacing your //TODO: put GPU commands here! comment):
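A sketch (assuming render_pass and framebuffer are in scope, rtg is the RTG instance, and &lt;array&gt; is included):

```cpp
{ //begin (and, for now, immediately end) the render pass:
	std::array<VkClearValue, 2> clear_values{
		VkClearValue{.color{.float32{1.0f, 0.0f, 1.0f, 1.0f}}},
		VkClearValue{.depthStencil{.depth = 1.0f, .stencil = 0}},
	};

	VkRenderPassBeginInfo begin_info{
		.sType = VK_STRUCTURE_TYPE_RENDER_PASS_BEGIN_INFO,
		.renderPass = render_pass,
		.framebuffer = framebuffer,
		.renderArea{
			.offset = {.x = 0, .y = 0},
			.extent = rtg.swapchain_extent,
		},
		.clearValueCount = uint32_t(clear_values.size()),
		.pClearValues = clear_values.data(),
	};

	vkCmdBeginRenderPass(workspace.command_buffer, &begin_info, VK_SUBPASS_CONTENTS_INLINE);

	//drawing commands will go here

	vkCmdEndRenderPass(workspace.command_buffer);
}
```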
Render passes (handle type VkRenderPass) describe the layout of attachments (things that get drawn to, i.e., color buffers, depth buffers, and stencil buffers -- or just images, in the Vulkan parlance), as well as how attachments are loaded before drawing, and how attachments are stored after drawing.
Basically, render passes describe how to move rendered-to image data around on a GPU during drawing.
Let's look through the VkRenderPassBeginInfo structure to see what data -- other than the pass itself -- is needed to actually begin a render pass:
.framebuffer --
while render passes describe how attachments are used, they don't actually reference any specific images.
That's the job of a framebuffer (handle type VkFramebuffer).
Framebuffers provide references to the specific attachments to use in a render pass.
(This framebuffer has been conveniently configured for us in code you'll eventually get to.)
.renderArea --
the render area member specifies the pixel area that will be rendered to.
We set this to the whole size of the image being rendered, which the swapchain-management code in RTG helpfully stores in its RTG::swapchain_extent member.
(Note, also, the mix of .member = x and .member{x} styles for writing designated initializers.
These are equivalent. We tend to include the equal sign for things on one line and use curly braces when the initializer is spread over multiple lines.)
.pClearValues --
it so happens that when this render pass was created (you'll see that code later in the tutorial), we specified two attachments -- a color buffer and a depth buffer -- and that each attachment would be "loaded" by being cleared to a constant value; hence the two-element clear_values array supplied here.
If you compile and run the code now -- and you should! -- you'll see that the background is no longer grey.
Instead, it is {1.0f, 0.0f, 1.0f, 1.0f}, which is bright magenta (clear colors use RGBA ordering).
Per-pixel Computation: Pipelines and Shaders
Now we've got a nice background color, but we're using a modern GPU!
We don't need to settle for a boring color.
So let's make the GPU draw a fancy background using its drawing capabilities.
In particular, we're going to have the GPU draw a single triangle that covers the whole screen, and then write a fragment shader that computes colorful outputs for every screen position.
Making a Pipeline
Running the GPU's graphics pipeline requires a lot of configuration information.
In Vulkan, this information is captured in a pipeline object (handle type: VkPipeline).
Declaring BackgroundPipeline
The initialization procedure for a pipeline is verbose, and -- for other pipelines we will make -- there is a fair bit of associated data and type information.
For this reason, we'll store our pipeline in a structure inside of Tutorial.
Our structure will manage a VkPipelineLayout, giving the type of the global inputs to the pipeline, as well as a handle to the pipeline itself (a VkPipeline).
Our structure will have create and destroy functions to handle creating and destroying both the layout and the pipeline.
We will also add some placeholder comments for where (later, and in other pipelines) we'll put other data members.
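A sketch of the declaration (inside struct Tutorial; the member names here are one reasonable choice, and are the names the rest of this page will use):

```cpp
struct BackgroundPipeline {
	//no descriptor set layouts (yet)

	VkPipelineLayout layout = VK_NULL_HANDLE;

	//no extra types or data members (yet)

	VkPipeline handle = VK_NULL_HANDLE;

	void create(RTG &, VkRenderPass render_pass, uint32_t subpass);
	void destroy(RTG &);
} background_pipeline;
```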
Now that you've got the structure, let's call the create and destroy functions in Tutorial::Tutorial and Tutorial::~Tutorial, respectively:
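In the constructor (a sketch; render_pass is the render pass created just above, and our pipeline runs in its first subpass):

```cpp
background_pipeline.create(rtg, render_pass, 0);
```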
We're putting the pipeline creation after the render pass is created because pipeline creation requires a render pass to describe the output attachments the pipeline will be used with.
We're putting the pipeline creation before the workspaces are created because we'll eventually create some per-pipeline, per-workspace data.
And the destruction is sequenced in the opposite order of the construction:
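A sketch:

```cpp
background_pipeline.destroy(rtg);
```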
Implementing BackgroundPipeline
If you build now (and you should!) everything should compile fine, but you will get linker errors about missing symbols for the BackgroundPipeline::create and BackgroundPipeline::destroy functions.
So let's go ahead and write those.
In a new file, Tutorial-BackgroundPipeline.cpp, write:
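A sketch; the refsol:: function names and signatures below are hypothetical -- use whatever refsol.hpp in your codebase actually declares:

```cpp
#include "Tutorial.hpp"
#include "refsol.hpp"

void Tutorial::BackgroundPipeline::create(RTG &rtg, VkRenderPass render_pass, uint32_t subpass) {
	//for now, let refsol:: do the work; note the VK_NULL_HANDLEs standing in
	// for shader modules we haven't made yet:
	refsol::BackgroundPipeline_create(rtg, render_pass, subpass, VK_NULL_HANDLE, VK_NULL_HANDLE, &layout, &handle);
}

void Tutorial::BackgroundPipeline::destroy(RTG &rtg) {
	refsol::BackgroundPipeline_destroy(rtg, &layout, &handle);
}
```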
Hmm, those VK_NULL_HANDLEs look like something we need to address at some point.
But for now let's get this into the build by adding the new source file to the compile-and-link lists in Maekfile.js.
Compiling and linking should now successfully complete.
Running the program should produce the same output it did before, and you shouldn't get any complaints in the console (e.g., about failing to destroy some resources or something).
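Now let's actually use the pipeline. In Tutorial::render, inside the render pass (replacing the "drawing commands will go here" comment), record commands to configure and run it. A sketch, with names assumed from earlier:

```cpp
{ //set scissor rectangle to cover the whole image:
	VkRect2D scissor{
		.offset = {.x = 0, .y = 0},
		.extent = rtg.swapchain_extent,
	};
	vkCmdSetScissor(workspace.command_buffer, 0, 1, &scissor);
}
{ //set viewport transform to cover the whole image:
	VkViewport viewport{
		.x = 0.0f,
		.y = 0.0f,
		.width = float(rtg.swapchain_extent.width),
		.height = float(rtg.swapchain_extent.height),
		.minDepth = 0.0f,
		.maxDepth = 1.0f,
	};
	vkCmdSetViewport(workspace.command_buffer, 0, 1, &viewport);
}

//draw with the background pipeline:
vkCmdBindPipeline(workspace.command_buffer, VK_PIPELINE_BIND_POINT_GRAPHICS, background_pipeline.handle);
vkCmdDraw(workspace.command_buffer, 3, 1, 0, 0);
```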
The first two commands, vkCmdSetScissor and vkCmdSetViewport,
set the scissor rectangle (the subset of the screen that gets drawn to) and the viewport transform (how device coordinates map to window coordinates) respectively.
With the given parameters, these make sure that our pipeline's output will exactly cover the swapchain image getting rendered to.
The next command, vkCmdBindPipeline, says that any subsequent drawing commands should use our freshly created background pipeline.
All three of these commands are state commands.
They are setting up parameters for subsequent action commands,
like vkCmdDraw.
This particular command runs the pipeline for -- reading the parameters -- 3 vertices and 1 instance, starting at vertex 0 and instance 0.
In other words, it draws exactly one triangle.
Why a triangle? That's because the pipeline was configured to draw a triangle.
How does it cover the whole screen with one triangle? Isn't the screen a rectangle?
We'll talk about that shortly.
Anyway, take a break to build and run and you'll see what the background pipeline does.
A Full-screen Shader
Let's take control of what the background pipeline is actually drawing.
To draw things using the graphics pipeline on the GPU we need to provide a program to run for every vertex (a vertex shader) and a program to run for every fragment after rasterization (a fragment shader).
We write these shader programs in a C-like language called GLSL (the OpenGL Shading Language), which is external to our main C++ program,
then we pass the compiled shader modules (handle type: VkShaderModule) when creating the pipeline.
So let's actually get those plumbed in (and address those awkward VK_NULL_HANDLEs):
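A sketch of the additions to Tutorial::BackgroundPipeline::create (the spv/*.inl paths are an assumption -- match wherever your build puts them -- and rtg.device is assumed to expose the VkDevice):

```cpp
static uint32_t vert_code[] =
#include "spv/background.vert.inl"
;
static uint32_t frag_code[] =
#include "spv/background.frag.inl"
;

//turn the SPIR-V code buffers into shader modules:
VkShaderModule vert_module = VK_NULL_HANDLE;
VkShaderModule frag_module = VK_NULL_HANDLE;
{
	VkShaderModuleCreateInfo create_info{
		.sType = VK_STRUCTURE_TYPE_SHADER_MODULE_CREATE_INFO,
		.codeSize = sizeof(vert_code),
		.pCode = vert_code,
	};
	vkCreateShaderModule(rtg.device, &create_info, nullptr, &vert_module);

	create_info.codeSize = sizeof(frag_code);
	create_info.pCode = frag_code;
	vkCreateShaderModule(rtg.device, &create_info, nullptr, &frag_module);
}

//...then pass vert_module and frag_module to the pipeline creation call
// in place of the VK_NULL_HANDLEs from before.
```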
This makes some static (i.e., local to this object file) buffers of SPIR-V code from .inl files (...that we haven't created yet);
turns these code buffers into shader modules (Vulkan's wrapper for a SPIR-V code buffer);
and passes them on to the refsol:: pipeline creation function,
where they will be used as the shaders in the created pipeline.
Under the hood, the refsol pipeline creation function substitutes modules built from some compiled-in code buffers when VK_NULL_HANDLE is passed for the module parameters.
This is why the pipeline worked for us earlier.
Okay, one quick modification to Maekfile.js: add build rules that compile the two shader source files into SPIR-V .inl files.
If you run Maekfile.js now you'll get an error about missing files,
but that's expected -- we need to write the shaders!
Background Vertex Shader
A vertex shader's job is to compute vertex positions.
The GPU then assembles these vertices into primitives (in this case, triangles), clips them, and rasterizes the result to produce fragments.
But that's getting a bit ahead of ourselves.
To start, add this code to background.vert:
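A sketch of background.vert (any three vertices that contain the [-1,1] square in the right winding order will do; these particular expressions are one choice):

```glsl
#version 450

void main() {
	//map gl_VertexIndex 0,1,2 to the corners of a big triangle:
	// index 0 -> (-1,-1), index 1 -> (-1, 3), index 2 -> ( 3,-1)
	gl_Position = vec4(
		(gl_VertexIndex & 2) * 2 - 1,
		(gl_VertexIndex & 1) * 4 - 1,
		0.0,
		1.0
	);
}
```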
A vertex shader's primary goal is to set gl_Position which specifies the position of the vertex in clip space.
Typically, a vertex shader uses its per-vertex attributes -- such as that vertex's position in the local space of a mesh -- to set that clip position.
Because this shader's only purpose is to draw three vertices that make up a screen-covering triangle, we forego using any attributes and instead generate the three corners of our screen-covering triangle using the built-in vertex index, gl_VertexIndex.
If you think through the code, you should find that the three vertices land at (-1,-1), (-1,3), and (3,-1), making a triangle that entirely contains the [-1,1]x[-1,1] square.
The outputs of the vertex shader are in clip coordinates,
which means that the triangle is clipped to the [-w,w]x[-w,w]x[0,w] volume before the vertices are passed through the homogeneous divide (divide-by-w) to get normalized device coordinates, and finally stretched out to framebuffer coordinates by the viewport transformation.
Vulkan makes the mistake of defining normalized device coordinates so that (-1,-1) is the upper left of the window and (1,1) is the lower right.
(Meaning that the "y" axis in normalized device coordinates points downward, violating mathematical convention and common sense [if you think like a mathematician].)
On the other hand, with "+z" pointing inward, this does make these coordinate systems right-handed (which makes sense if you think like a mathematician).
In the end, it's just a convention to be aware of.
And, specifically, note that OpenGL's conventions are different (using [-1,1] for the z range in clip space, having +y point up);
so using -- e.g. -- perspective matrix example code for OpenGL will result in an incorrect transformation for Vulkan.
Background Fragment Shader
The fragment shader's job is to output a color value for a fragment. We'll calculate this fragment color entirely based on the gl_FragCoord (which gives the fragment's position in framebuffer coordinates, where one unit in x and y is one pixel) in order to make a static pattern.
Unlike in the vertex shader, there is no built-in output -- we have to declare our color output with layout(location = 0) out vec4 outColor and write to it during main in order to color our fragment. Just like our clear color, this is RGBA.
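A sketch of background.frag (the particular pattern is up to you; this one tiles a color gradient every 100 pixels):

```glsl
#version 450

layout(location = 0) out vec4 outColor;

void main() {
	//static pattern based on pixel coordinates (one unit = one pixel):
	outColor = vec4(fract(gl_FragCoord.xy / 100.0), 0.0, 1.0);
}
```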
Now that we've got both shaders written, you should be able to build and run.
It's worth noting here that GLSL is a full-featured language with nice features for doing graphics-y stuff (like vector and matrix types and built-in interpolation functions).
For an overview, check out this quick reference card (starting on page 9), or look at the full GLSL specification.
Though be aware that GLSL targeting SPIR-V for Vulkan is slightly different from GLSL for OpenGL.
Inter-shader Communication
Right now our shader computes the output color based only on the framebuffer coordinates of the fragment (i.e., pixel-center coordinates).
This means that if we resize the window the pattern stays the same size.
To fix this, we can pass a position varying from the vertex shader to the fragment shader.
A varying is any value declared out in the vertex shader and in in the fragment shader.
In our vertex shader, add this above main():
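```glsl
layout(location = 0) out vec2 position;
```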
And in our fragment shader add this above main():
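```glsl
layout(location = 0) in vec2 position;
```

The location numbers are arbitrary, but the two declarations must match in location and type -- that matching is what actually connects them.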
Varying values are interpolated from the vertices to the fragments using the barycentric coordinates of each fragment within its progenitor triangle.
So even though we're just setting the out vec2 position varying value at the corners of the triangle, our fragments see a nice gradient of in vec2 position values.
Back to the code.
We should remember to set the position in our vertex shader (yes, I'm just going to edit the code so the varying has the same name -- position -- in both shaders now):
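```glsl
layout(location = 0) out vec2 position;

void main() {
	position = vec2(
		(gl_VertexIndex & 2) * 2 - 1,
		(gl_VertexIndex & 1) * 4 - 1
	);
	gl_Position = vec4(position, 0.0, 1.0);
}
```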
...and read it in our fragment shader.
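A sketch (scaling from [-1,1] to [0,1], which gives the corner colors described below):

```glsl
void main() {
	outColor = vec4(0.5 * position + 0.5, 0.0, 1.0);
}
```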
If you re-build and run the code now, you'll have a boring pattern that always has black in the upper left of the window, green in the lower left, red in the upper right, and yellow in the lower right, no matter how you resize.
Adding Time: Push Constants
Our pattern now nicely resizes with the window, but it would be even cooler if we could animate it.
Unfortunately, there isn't a built-in time variable for us to rely on.
We'll have to get a time value to our fragment shader from our CPU-side code.
To do this, we'll use the GPU's ability to pass a small amount (as little as 128 bytes on some implementations!) of data to the pipeline inside a command buffer as "push constants".
In contrast, other methods of exposing data to shaders involve setting up buffers and fancy pointers to those buffers -- more complicated and overkill for our small data!
Tracking Time
Before we can pass elapsed time to our shader, we need to compute it on the CPU.
Add a float time member to `struct Tutorial` in `Tutorial.hpp`:
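```cpp
float time = 0.0f;
```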
And add code in Tutorial::update to add the elapsed time parameter dt into our time accumulator:
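```cpp
//in Tutorial::update:
time += dt;
```

(If you're worried about precision loss during very long runs, you could wrap the accumulator with std::fmod; for a pattern with a whole-number-of-seconds period this doesn't change the output.)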
Getting Time in Fragment Shader with Push Constants
To let our fragment shader accept a push constant we'll add this code above main():
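```glsl
layout(push_constant) uniform Push {
	float time;
};
```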
The layout(push_constant) layout specifier says that the coming structure will be supplied by the CPU-side code via a push constant.
The uniform keyword indicates that the data is the same across all shader invocations in a given execution of the pipeline, which makes sense because we only push one set of values.
The identifier at the end of the line, Push, is the "block name" and is essentially vestigial.
You'll never need to refer to the block by that name inside your shader code (it's actually for reflecting about your shader in your renderer, something which is supported in OpenGL but not supported [without extra libraries] in Vulkan).
You can read more about GLSL syntax for blocks here.
After that, we have the structure declaration, which matches C syntax. Now we can refer to time simply with the identifier time inside our main function.
Let's get Verbose: Pushing the Time
Now that we've got the shaders ready to receive a push constant we need to build and push the constants from the CPU-side code.
This will eventually require us to replace refsol::'s pipeline creation function, so get your fingers ready for some extensive typing.
There's a lot of state that goes into pipeline creation.
The Push Structure
We need a CPU-side description of what we're pushing to the shader.
Let's add a Push structure to our Tutorial::BackgroundPipeline structure:
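```cpp
struct Push {
	float time;
};
```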
The Pipeline Layout
The first part of pipeline creation is creating a pipeline layout (handle type: VkPipelineLayout), which is, in turn, built from a list of descriptor set layouts (handle type: VkDescriptorSetLayout) and push constant ranges (no handle! just a structure: VkPushConstantRange).
If you think of a pipeline as being some sort of mysterious device in a box, then the pipeline layout is giving the shapes of the input connectors on the box.
(If you think of a pipeline as being a function, then the pipeline layout is the type of the input arguments.)
Anyway, let's write the code:
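A sketch (in Tutorial::BackgroundPipeline::create, replacing the refsol:: call; VkResult checks omitted):

```cpp
{ //create the pipeline layout:
	VkPushConstantRange range{
		.stageFlags = VK_SHADER_STAGE_FRAGMENT_BIT,
		.offset = 0,
		.size = sizeof(Push),
	};

	VkPipelineLayoutCreateInfo create_info{
		.sType = VK_STRUCTURE_TYPE_PIPELINE_LAYOUT_CREATE_INFO,
		.setLayoutCount = 0, //no descriptor sets
		.pSetLayouts = nullptr,
		.pushConstantRangeCount = 1,
		.pPushConstantRanges = &range,
	};

	vkCreatePipelineLayout(rtg.device, &create_info, nullptr, &layout);
}
```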
The pPushConstantRanges member of the layout create info structure tells the Vulkan driver what shader(s) are going to use what portion(s) of our push constant structure.
In this case, we supply only a single range.
We set stageFlags to VK_SHADER_STAGE_FRAGMENT_BIT because we want this push constant to be accessible in the fragment shader.
If we wanted it to be usable in the vertex shader as well, we'd bitwise-or VK_SHADER_STAGE_VERTEX_BIT in (and add the correct block within our vertex shader).
Drawing the Rest of the Owl
Now that we've got the pipeline layout with push constants made, we need to actually create the pipeline.
We're going to need a bunch of structures for this, so let's do them one-by-one.
Add these in order after the layout creation block in Tutorial::BackgroundPipeline::create.
The first thing we'll add is a list of the shader modules to run.
This is an array because you can add more shader modules to run in other stages:
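A sketch (here and in the blocks below: fields not mentioned are zero-initialized by the designated-initializer style, vert_module and frag_module are the modules created at the top of the function, and VkResult checks are omitted):

```cpp
std::array<VkPipelineShaderStageCreateInfo, 2> stages{
	VkPipelineShaderStageCreateInfo{
		.sType = VK_STRUCTURE_TYPE_PIPELINE_SHADER_STAGE_CREATE_INFO,
		.stage = VK_SHADER_STAGE_VERTEX_BIT,
		.module = vert_module,
		.pName = "main",
	},
	VkPipelineShaderStageCreateInfo{
		.sType = VK_STRUCTURE_TYPE_PIPELINE_SHADER_STAGE_CREATE_INFO,
		.stage = VK_SHADER_STAGE_FRAGMENT_BIT,
		.module = frag_module,
		.pName = "main",
	},
};
```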
Next up, set up the dynamic state structure to indicate that viewport and scissor for this pipeline will be set dynamically (with state commands) instead of remaining fixed:
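```cpp
std::array<VkDynamicState, 2> dynamic_states{
	VK_DYNAMIC_STATE_VIEWPORT,
	VK_DYNAMIC_STATE_SCISSOR,
};

VkPipelineDynamicStateCreateInfo dynamic_state{
	.sType = VK_STRUCTURE_TYPE_PIPELINE_DYNAMIC_STATE_CREATE_INFO,
	.dynamicStateCount = uint32_t(dynamic_states.size()),
	.pDynamicStates = dynamic_states.data(),
};
```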
Create a vertex input state structure that indicates there are no per-vertex inputs:
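```cpp
VkPipelineVertexInputStateCreateInfo vertex_input_state{
	.sType = VK_STRUCTURE_TYPE_PIPELINE_VERTEX_INPUT_STATE_CREATE_INFO,
	.vertexBindingDescriptionCount = 0, //no per-vertex inputs
	.vertexAttributeDescriptionCount = 0,
};
```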
Make an input assembly state structure that tells Vulkan that the pipeline will draw triangles from a list:
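```cpp
VkPipelineInputAssemblyStateCreateInfo input_assembly_state{
	.sType = VK_STRUCTURE_TYPE_PIPELINE_INPUT_ASSEMBLY_STATE_CREATE_INFO,
	.topology = VK_PRIMITIVE_TOPOLOGY_TRIANGLE_LIST,
	.primitiveRestartEnable = VK_FALSE,
};
```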
Create a viewport state structure that says this pipeline uses only one viewport and one scissor rectangle:
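```cpp
VkPipelineViewportStateCreateInfo viewport_state{
	.sType = VK_STRUCTURE_TYPE_PIPELINE_VIEWPORT_STATE_CREATE_INFO,
	.viewportCount = 1, //the actual viewport is set dynamically
	.scissorCount = 1, //as is the actual scissor rectangle
};
```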
Configure the rasterizer to cull back faces (where front faces are oriented counterclockwise), and to fill polygons:
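```cpp
VkPipelineRasterizationStateCreateInfo rasterization_state{
	.sType = VK_STRUCTURE_TYPE_PIPELINE_RASTERIZATION_STATE_CREATE_INFO,
	.depthClampEnable = VK_FALSE,
	.rasterizerDiscardEnable = VK_FALSE,
	.polygonMode = VK_POLYGON_MODE_FILL,
	.cullMode = VK_CULL_MODE_BACK_BIT,
	.frontFace = VK_FRONT_FACE_COUNTER_CLOCKWISE,
	.depthBiasEnable = VK_FALSE,
	.lineWidth = 1.0f,
};
```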
Disable multisampling:
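```cpp
VkPipelineMultisampleStateCreateInfo multisample_state{
	.sType = VK_STRUCTURE_TYPE_PIPELINE_MULTISAMPLE_STATE_CREATE_INFO,
	.rasterizationSamples = VK_SAMPLE_COUNT_1_BIT,
	.sampleShadingEnable = VK_FALSE,
};
```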
Don't do any depth or stencil tests, either:
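```cpp
VkPipelineDepthStencilStateCreateInfo depth_stencil_state{
	.sType = VK_STRUCTURE_TYPE_PIPELINE_DEPTH_STENCIL_STATE_CREATE_INFO,
	.depthTestEnable = VK_FALSE,
	.depthWriteEnable = VK_FALSE,
	.depthBoundsTestEnable = VK_FALSE,
	.stencilTestEnable = VK_FALSE,
};
```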
And set color blending for the one color attachment to be disabled:
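```cpp
std::array<VkPipelineColorBlendAttachmentState, 1> attachment_states{
	VkPipelineColorBlendAttachmentState{
		.blendEnable = VK_FALSE,
		.colorWriteMask = VK_COLOR_COMPONENT_R_BIT | VK_COLOR_COMPONENT_G_BIT
		                | VK_COLOR_COMPONENT_B_BIT | VK_COLOR_COMPONENT_A_BIT,
	},
};

VkPipelineColorBlendStateCreateInfo color_blend_state{
	.sType = VK_STRUCTURE_TYPE_PIPELINE_COLOR_BLEND_STATE_CREATE_INFO,
	.logicOpEnable = VK_FALSE,
	.attachmentCount = uint32_t(attachment_states.size()),
	.pAttachments = attachment_states.data(),
};
```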
With all of these parameters specified, reference them all in one large parameter structure and actually create the pipeline:
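```cpp
VkGraphicsPipelineCreateInfo create_info{
	.sType = VK_STRUCTURE_TYPE_GRAPHICS_PIPELINE_CREATE_INFO,
	.stageCount = uint32_t(stages.size()),
	.pStages = stages.data(),
	.pVertexInputState = &vertex_input_state,
	.pInputAssemblyState = &input_assembly_state,
	.pViewportState = &viewport_state,
	.pRasterizationState = &rasterization_state,
	.pMultisampleState = &multisample_state,
	.pDepthStencilState = &depth_stencil_state,
	.pColorBlendState = &color_blend_state,
	.pDynamicState = &dynamic_state,
	.layout = layout,
	.renderPass = render_pass,
	.subpass = subpass,
};

vkCreateGraphicsPipelines(rtg.device, VK_NULL_HANDLE, 1, &create_info, nullptr, &handle);
```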
You've conquered the longest stream of "just typing stuff" in the tutorial so far.
Do a victory lap by de-allocating the shader modules you made at the top of the function:
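```cpp
vkDestroyShaderModule(rtg.device, frag_module, nullptr);
vkDestroyShaderModule(rtg.device, vert_module, nullptr);
```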
Pushing the Constants and Profiting
Two last things and you will have a nicely animated background.
First, in your render function, actually push the constants:
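```cpp
//after binding the background pipeline:
BackgroundPipeline::Push push{
	.time = time,
};
vkCmdPushConstants(workspace.command_buffer, background_pipeline.layout,
	VK_SHADER_STAGE_FRAGMENT_BIT, 0, sizeof(push), &push);
```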
Second, actually do something with time in your fragment shader:
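A sketch (one possibility -- any use of time will animate):

```glsl
void main() {
	outColor = vec4(
		fract(0.5 * position.x + 0.5 - time), //slides rightward, wrapping once per second
		0.5 * position.y + 0.5,
		0.0,
		1.0
	);
}
```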
Compile and run and you'll have the pattern sweeping left to right at 1Hz.