A Colorful Background
It's time to start improving the code. Let's start with that boring grey background. First we'll change its color (and learn a bit about command buffers and render passes in the process), then we'll get full programmatic control (and learn a bit about pipelines and shaders).
Changing the clear color: Render Passes and Command Buffers
Our first goal is to make the clear color fancier (i.e., not grey).
To do this, we'll need to modify the code in Tutorial::render. Here's where we're starting:
void Tutorial::render(RTG &rtg_, RTG::RenderParams const &render_params) {
//assert that parameters are valid:
assert(&rtg == &rtg_);
assert(render_params.workspace_index < workspaces.size());
assert(render_params.image_index < swapchain_framebuffers.size());
//get more convenient names for the current workspace and target framebuffer:
Workspace &workspace = workspaces[render_params.workspace_index];
VkFramebuffer framebuffer = swapchain_framebuffers[render_params.image_index];
//record (into `workspace.command_buffer`) commands that run a `render_pass` that just clears `framebuffer`:
refsol::Tutorial_render_record_blank_frame(rtg, render_pass, framebuffer, &workspace.command_buffer);
//submit `workspace.command_buffer` for the GPU to run:
refsol::Tutorial_render_submit(rtg, render_params, workspace.command_buffer);
}
The Tutorial::render function, in Tutorial.cpp, before modification.
Command Buffers
In Vulkan, command buffers (handle type: VkCommandBuffer) are lists of commands that the GPU can run. To get the GPU to do anything using Vulkan, you first fill a command buffer with the commands you want the GPU to run, then you submit that command buffer to one of the GPU's command queues.
The Tutorial class allocates command buffers -- one per Workspace -- and puts commands in them in the render function.
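(The allocation itself happens in the Tutorial constructor via refsol::Tutorial_constructor_workspace; under the hood, allocating a command buffer from a command pool looks roughly like this -- a sketch of the idea, not refsol's exact code:)
VkCommandBufferAllocateInfo alloc_info{
.sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_ALLOCATE_INFO,
.commandPool = command_pool, //command buffers are allocated from a pool
.level = VK_COMMAND_BUFFER_LEVEL_PRIMARY, //primary buffers can be submitted directly to a queue
.commandBufferCount = 1,
};
VK( vkAllocateCommandBuffers(rtg.device, &alloc_info, &workspace.command_buffer) );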
Right now it does that with a refsol:: function, but let's change that. Remove the call to refsol::Tutorial_render_record_blank_frame, and add a call to vkResetCommandBuffer:
Tutorial.cpp
void Tutorial::render(RTG &rtg_, RTG::RenderParams const &render_params) {
//...
//removed: refsol::Tutorial_render_record_blank_frame(rtg, render_pass, framebuffer, &workspace.command_buffer);
//reset the command buffer (clear old commands):
VK( vkResetCommandBuffer(workspace.command_buffer, 0) );
//submit `workspace.command_buffer` for the GPU to run:
refsol::Tutorial_render_submit(rtg, render_params, workspace.command_buffer);
}
Resetting the command buffer clears any previously recorded commands.
However, if you were to compile and run the code now (potentially tagging the framebuffer as [[maybe_unused]] to avoid a compile warning) you'd get a bunch of complaints about the command buffer being "unrecorded."
So let's fix that by actually recording the command buffer:
Tutorial.cpp
void Tutorial::render(RTG &rtg_, RTG::RenderParams const &render_params) {
//...
//reset the command buffer (clear old commands):
VK( vkResetCommandBuffer(workspace.command_buffer, 0) );
{ //begin recording:
VkCommandBufferBeginInfo begin_info{
.sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO,
.flags = VK_COMMAND_BUFFER_USAGE_ONE_TIME_SUBMIT_BIT, //will record again every submit
};
VK( vkBeginCommandBuffer(workspace.command_buffer, &begin_info) );
}
//TODO: put GPU commands here!
//end recording:
VK( vkEndCommandBuffer(workspace.command_buffer) );
//...
}
Now if you compile and run you'll get a different set of errors about the image passed to the present function not being in the proper layout. We'll take care of this in a moment by running a render pass.
Render Passes
Let's get our image into the correct layout so we can actually see something. To do this, we'll add code that records commands to begin (and then immediately end) a render pass.
Add this code between your command buffer begin and end calls (replacing your //TODO: put GPU commands here! comment):
{ //render pass
std::array< VkClearValue, 2 > clear_values{
VkClearValue{ .color{ .float32{1.0f, 0.0f, 1.0f, 1.0f} } },
VkClearValue{ .depthStencil{ .depth = 1.0f, .stencil = 0 } },
};
VkRenderPassBeginInfo begin_info{
.sType = VK_STRUCTURE_TYPE_RENDER_PASS_BEGIN_INFO,
.renderPass = render_pass,
.framebuffer = framebuffer,
.renderArea{
.offset = {.x = 0, .y = 0},
.extent = rtg.swapchain_extent,
},
.clearValueCount = uint32_t(clear_values.size()),
.pClearValues = clear_values.data(),
};
vkCmdBeginRenderPass(workspace.command_buffer, &begin_info, VK_SUBPASS_CONTENTS_INLINE);
//TODO: run pipelines here
vkCmdEndRenderPass(workspace.command_buffer);
}
Render passes (handle type: VkRenderPass) describe the layout of attachments (things that get drawn to, i.e., color buffers, depth buffers, and stencil buffers -- or just images, in Vulkan parlance), as well as how attachments are loaded before drawing and how they are stored after drawing.
Basically, render passes describe how rendered-to image data moves around on the GPU during drawing.
Let's look through the VkRenderPassBeginInfo structure to see what data -- other than the pass itself -- is needed to actually begin a render pass:
- .framebuffer -- while render passes describe how attachments are used, they don't actually reference any specific images. That's the job of a framebuffer (handle type: VkFramebuffer). Framebuffers provide references to the specific attachments to use in a render pass. (This framebuffer has been conveniently configured for us in code you'll eventually get to.)
- .renderArea -- specifies the pixel area that will be rendered to. We set this to the whole size of the image being rendered, which the swapchain-management code in RTG helpfully stores in its RTG::swapchain_extent member. (Note, also, the mix of .member = x and .member{x} styles for writing designated initializers. These are equivalent. We tend to include the equal sign for things on one line and use curly braces when the initializer is spread over multiple lines.)
- .pClearValues -- it so happens that when this render pass was created (you'll see that code later in the tutorial), we specified two attachments -- a color buffer and a depth buffer -- and that each of these attachments would be "loaded" by being cleared to a constant value; thus the array of two clear_values being supplied.
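As a preview of that creation code: each attachment gets a description giving its format, load/store behavior, and image layouts. Here's a sketch -- with illustrative values, not the tutorial's actual code -- of a color attachment that is cleared at load and left ready for presentation:
VkAttachmentDescription color_attachment{
.format = VK_FORMAT_B8G8R8A8_SRGB, //assumption: the real format comes from the swapchain
.samples = VK_SAMPLE_COUNT_1_BIT,
.loadOp = VK_ATTACHMENT_LOAD_OP_CLEAR, //"loaded" by clearing -- this is why we supply clear_values
.storeOp = VK_ATTACHMENT_STORE_OP_STORE, //keep the rendered image after the pass
.stencilLoadOp = VK_ATTACHMENT_LOAD_OP_DONT_CARE,
.stencilStoreOp = VK_ATTACHMENT_STORE_OP_DONT_CARE,
.initialLayout = VK_IMAGE_LAYOUT_UNDEFINED,
.finalLayout = VK_IMAGE_LAYOUT_PRESENT_SRC_KHR, //the layout the present step was complaining about
};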
If you compile and run the code now -- and you should! -- you'll see that the background is no longer grey. Instead, it is {1.0f, 0.0f, 1.0f, 1.0f}, which is bright magenta (clear colors use RGBA ordering):

Per-pixel Computation: Pipelines and Shaders
Now we've got a nice background color, but we're using a modern GPU! We don't need to settle for a boring color. So let's make the GPU draw a fancy background using its drawing capabilities.
In particular, we're going to have the GPU draw a single triangle that covers the whole screen, and then write a fragment shader that computes a colorful output for every screen position.
Making a Pipeline
Running the GPU's graphics pipeline requires a lot of configuration information. In Vulkan, this information is captured in a pipeline object (handle type: VkPipeline).
Declaring BackgroundPipeline
The initialization procedure for a pipeline is verbose, and -- for the other pipelines we will make -- there is a fair bit of associated data and type information. For this reason, we'll store our pipeline in a structure inside of Tutorial.
Our structure will manage a VkPipelineLayout, giving the type of the global inputs to the pipeline, as well as a handle to the pipeline itself (a VkPipeline). It will have create and destroy functions to handle creating and destroying both the layout and the pipeline. We will also add some placeholder comments for where (later, and in other pipelines) we'll put other data members.
struct Tutorial : RTG::Application {
//...
//Pipelines:
//TODO
struct BackgroundPipeline {
//no descriptor set layouts
//no push constants
VkPipelineLayout layout = VK_NULL_HANDLE;
//no vertex bindings
VkPipeline handle = VK_NULL_HANDLE;
void create(RTG &, VkRenderPass render_pass, uint32_t subpass);
void destroy(RTG &);
} background_pipeline;
//...
};
Adding the BackgroundPipeline member structure to Tutorial.hpp.
Now that you've got the structure, let's call the create and destroy functions in Tutorial::Tutorial and Tutorial::~Tutorial, respectively:
Tutorial.cpp
Tutorial::Tutorial(RTG &rtg_) : rtg(rtg_) {
refsol::Tutorial_constructor(rtg, &depth_format, &render_pass, &command_pool);
background_pipeline.create(rtg, render_pass, 0);
workspaces.resize(rtg.workspaces.size());
for (Workspace &workspace : workspaces) {
refsol::Tutorial_constructor_workspace(rtg, command_pool, &workspace.command_buffer);
}
}
Calling BackgroundPipeline::create from Tutorial::Tutorial.
We're putting the pipeline creation after the render pass is created because pipeline creation requires a render pass to describe the output attachments the pipeline will be used with. We're putting the pipeline creation before the workspaces are created because we'll eventually create some per-pipeline, per-workspace data.
And the destruction is sequenced in the opposite order of the construction:
Tutorial.cpp
//in Tutorial::~Tutorial:
for (Workspace &workspace : workspaces) {
refsol::Tutorial_destructor_workspace(rtg, command_pool, &workspace.command_buffer);
}
workspaces.clear();
background_pipeline.destroy(rtg);
refsol::Tutorial_destructor(rtg, &render_pass, &command_pool);
Calling BackgroundPipeline::destroy from Tutorial::~Tutorial.
Implementing BackgroundPipeline
If you build now (and you should!) everything should compile fine, but you will get linker errors about missing symbols for the BackgroundPipeline::create and BackgroundPipeline::destroy functions. So let's go ahead and write those. In a new file, Tutorial-BackgroundPipeline.cpp, write:
#include "Tutorial.hpp"
#include "Helpers.hpp"
#include "refsol.hpp"
void Tutorial::BackgroundPipeline::create(RTG &rtg, VkRenderPass render_pass, uint32_t subpass) {
VkShaderModule vert_module = VK_NULL_HANDLE;
VkShaderModule frag_module = VK_NULL_HANDLE;
refsol::BackgroundPipeline_create(rtg, render_pass, subpass, vert_module, frag_module, &layout, &handle);
}
void Tutorial::BackgroundPipeline::destroy(RTG &rtg) {
refsol::BackgroundPipeline_destroy(rtg, &layout, &handle);
}
For now, these functions are just thin wrappers around refsol:: functions!
Hmm, those VK_NULL_HANDLEs look like something we need to address at some point. But for now let's get this into the build by editing Maekfile.js:
Maekfile.js:
//uncomment to build background shaders and pipeline:
const background_shaders = [
// maek.GLSLC('background.vert'),
// maek.GLSLC('background.frag'),
];
main_objs.push( maek.CPP('Tutorial-BackgroundPipeline.cpp', undefined, { depends:[...background_shaders] } ) );
Maekfile.js -- notice that the maek.GLSLC lines remain commented. We haven't written those shaders yet.
Compiling and linking should now complete successfully. Running the program should produce the same output it did before, and you shouldn't get any complaints in the console (e.g., about failing to destroy resources).
Running the Pipeline
Okay, let's see what this pipeline does.
Returning to the Tutorial::render function, add this code between your vkCmdBeginRenderPass and vkCmdEndRenderPass commands:
Tutorial.cpp
//...
vkCmdBeginRenderPass(workspace.command_buffer, &begin_info, VK_SUBPASS_CONTENTS_INLINE);
//TODO: run pipelines here
{ //set scissor rectangle:
VkRect2D scissor{
.offset = {.x = 0, .y = 0},
.extent = rtg.swapchain_extent,
};
vkCmdSetScissor(workspace.command_buffer, 0, 1, &scissor);
}
{ //configure viewport transform:
VkViewport viewport{
.x = 0.0f,
.y = 0.0f,
.width = float(rtg.swapchain_extent.width),
.height = float(rtg.swapchain_extent.height),
.minDepth = 0.0f,
.maxDepth = 1.0f,
};
vkCmdSetViewport(workspace.command_buffer, 0, 1, &viewport);
}
{ //draw with the background pipeline:
vkCmdBindPipeline(workspace.command_buffer, VK_PIPELINE_BIND_POINT_GRAPHICS, background_pipeline.handle);
vkCmdDraw(workspace.command_buffer, 3, 1, 0, 0);
}
vkCmdEndRenderPass(workspace.command_buffer);
//...
The first two commands, vkCmdSetScissor and vkCmdSetViewport, set the scissor rectangle (the subset of the screen that gets drawn to) and the viewport transform (how normalized device coordinates map to window coordinates), respectively. With the given parameters, these make sure that our pipeline's output will exactly cover the swapchain image being rendered to.
The next command, vkCmdBindPipeline, says that any subsequent drawing commands should use our freshly created background pipeline.
All three of these commands are state commands: they set up parameters for subsequent action commands, like vkCmdDraw.
This particular command runs the pipeline for -- reading the parameters in order -- 3 vertices and 1 instance, starting at vertex 0 and instance 0. In other words, it draws exactly one triangle.
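For reference, the parameters line up with vkCmdDraw's signature like this:
//vkCmdDraw(commandBuffer, vertexCount, instanceCount, firstVertex, firstInstance)
vkCmdDraw(workspace.command_buffer, 3, 1, 0, 0); //3 vertices, 1 instance, both starting at 0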
Why a triangle? That's because the pipeline was configured to draw a triangle. How does it cover the whole screen with one triangle? Isn't the screen a rectangle? We'll talk about that shortly.
Anyway, take a break to build and run and you'll see what the background pipeline does:

The refsol::-supplied background pipeline draws a gradient.
A Full-screen Shader
Let's take control of what the background pipeline is actually drawing.
To draw things using the graphics pipeline on the GPU we need to provide a program to run for every vertex (a vertex shader) and a program to run for every fragment after rasterization (a fragment shader).
We write these shader programs in a C-like language called GLSL (the OpenGL Shading Language), which is external to our main C++ program; then we pass the compiled shader modules (handle type: VkShaderModule) when creating the pipeline.
So let's actually get those plumbed in (and address those awkward VK_NULL_HANDLEs):
Tutorial-BackgroundPipeline.cpp
static uint32_t vert_code[] =
#include "spv/background.vert.inl"
;
static uint32_t frag_code[] =
#include "spv/background.frag.inl"
;
void Tutorial::BackgroundPipeline::create(RTG &rtg, VkRenderPass render_pass, uint32_t subpass) {
VkShaderModule vert_module = rtg.helpers.create_shader_module(vert_code);
VkShaderModule frag_module = rtg.helpers.create_shader_module(frag_code);
refsol::BackgroundPipeline_create(rtg, render_pass, subpass, vert_module, frag_module, &layout, &handle);
}
This makes some static (i.e., local to this object file) buffers of SPIR-V code from .inl files (...that we haven't created yet); turns these code buffers into shader modules (Vulkan's wrapper for a SPIR-V code buffer); and passes them on to the refsol:: pipeline creation function, where they will be used as the shaders in the created pipeline.
Under the hood, the refsol pipeline creation function substitutes in its own compiled-in shader code when VK_NULL_HANDLE is passed for the module parameters. This is why the pipeline worked for us earlier.
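If you're curious, create_shader_module is a thin wrapper around vkCreateShaderModule -- roughly like this sketch (not necessarily the actual Helpers code):
VkShaderModule create_shader_module_sketch(VkDevice device, uint32_t const *code, size_t bytes) {
VkShaderModuleCreateInfo create_info{
.sType = VK_STRUCTURE_TYPE_SHADER_MODULE_CREATE_INFO,
.codeSize = bytes, //size in *bytes*, even though pCode points at 32-bit words
.pCode = code,
};
VkShaderModule module = VK_NULL_HANDLE;
VK( vkCreateShaderModule(device, &create_info, nullptr, &module) );
return module;
}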
Okay, one quick modification to Maekfile.js:
const background_shaders = [
maek.GLSLC('background.vert'),
maek.GLSLC('background.frag'),
];
Uncomment the maek.GLSLC lines in Maekfile.js to build the background shaders.
If you run Maekfile.js now you'll get an error about missing files, but that's expected -- we need to write the shaders!
Background Vertex Shader
A vertex shader's job is to compute vertex positions. The GPU then assembles these vertices into primitives (in this case, triangles), clips them, and rasterizes the result to produce fragments. But that's getting a bit ahead of ourselves.
To start, add this code to background.vert:
background.vert (new file):
#version 450 //GLSL version 4.5
void main() {
vec2 POSITION = vec2(2 * (gl_VertexIndex & 2) - 1, 4 * (gl_VertexIndex & 1) - 1);
gl_Position = vec4(POSITION, 0.0, 1.0);
}
A vertex shader's primary goal is to set gl_Position, which specifies the position of the vertex in clip space. Typically a vertex shader uses its per-vertex attributes -- such as that vertex's position in the local space of a mesh -- to compute that clip position. Because this shader's only purpose is to draw three vertices that make up a screen-covering triangle, we forgo attributes and instead generate the three corners of our screen-covering triangle from the built-in vertex index, gl_VertexIndex.
If you think through the code, you should find that:
// gl_VertexIndex == 0 => gl_Position == vec4(-1, -1, 0, 1)
// gl_VertexIndex == 1 => gl_Position == vec4(-1, 3, 0, 1)
// gl_VertexIndex == 2 => gl_Position == vec4( 3, -1, 0, 1)
The outputs of the vertex shader are in clip coordinates, which means that the triangle is clipped to the [-w,w]x[-w,w]x[0,w] volume before the vertices are passed through the homogeneous divide (divide-by-w) to get normalized device coordinates, and finally stretched out to framebuffer coordinates by the viewport transform.
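Concretely, with the viewport we set earlier (x = y = 0, width W, height H, depth range [0,1]), the viewport transform works out to (a worked example, not code to add):
// x_framebuffer = (x_ndc + 1) / 2 * W
// y_framebuffer = (y_ndc + 1) / 2 * H
// z_framebuffer = z_ndc
//so NDC (-1,-1) lands at the window's upper-left corner and (1,1) at its lower-right.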

Vulkan makes the mistake of defining normalized device coordinates so that (-1,-1) is the upper left of the window and (1,1) is the lower right. (Meaning that the "y" axis in normalized device coordinates points downward, violating mathematical convention and common sense [if you think like a mathematician].) On the other hand, with "+z" pointing inward, this does make these coordinate systems right-handed (which makes sense if you think like a mathematician).
In the end, it's just a convention to be aware of. And, specifically, note that OpenGL's conventions are different (using [-1,1] for the z range in clip space, and having +y point up); so using -- e.g. -- perspective matrix example code written for OpenGL will result in an incorrect transformation for Vulkan.
Background Fragment Shader
The fragment shader's job is to output a color value for a fragment. We'll calculate this fragment color entirely from gl_FragCoord (which gives the fragment's position in framebuffer coordinates, where one unit in x and y is one pixel) in order to make a static pattern.
background.frag (new file):
#version 450
layout(location = 0) out vec4 outColor;
void main() {
outColor = vec4( fract(gl_FragCoord.x / 100), gl_FragCoord.y / 400, 0.2, 1.0 );
}
Unlike in the vertex shader, there is no built-in output -- we have to declare our color output with layout(location = 0) out vec4 outColor and write to it during main in order to color our fragment. Just like our clear color, this is RGBA.
Now that we've got both shaders written you should be able to build and run:

It's worth noting here that GLSL is a full-featured language with nice features for doing graphics-y stuff (like vector and matrix types and built-in interpolation functions). For an overview, check out this quick reference card (starting on page 9), or look at the full GLSL specification. Be aware, though, that GLSL targeting SPIR-V for Vulkan is slightly different from GLSL for OpenGL.
Inter-shader Communication
Right now our shader computes the output color based only on the framebuffer coordinates of the fragment (i.e., pixel-center coordinates). This means that if we resize the window the pattern stays the same size:


To fix this, we can pass a position varying from the vertex shader to the fragment shader. A varying is any value declared out in the vertex shader and in in the fragment shader.
In our vertex shader, add this above main():
background.vert
layout(location = 0) out vec2 position;
And in our fragment shader add this above main():
background.frag
layout(location = 0) in vec2 position;
Varying values are interpolated from the vertices to the fragments using the barycentric coordinates of those fragments within their progenitor triangles. So even though we're just setting the out vec2 position varying at the corners of the triangle, our fragments see a nice gradient of in vec2 position values.
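That is, a fragment with barycentric coordinates (b0, b1, b2) in its triangle receives the blend below (a worked formula, not code you need to add):
// position_fragment = b0 * position_vertex0 + b1 * position_vertex1 + b2 * position_vertex2
//  (with b0 + b1 + b2 == 1; interpolation is perspective-correct in general,
//   but with w == 1 at every vertex, as here, it reduces to this linear blend)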
Back to the code. We should remember to set position in our vertex shader (yes, I'm just giving the varying the same name -- position -- in both shaders):
background.vert
position = POSITION * 0.5 + 0.5; //make the screen [0,1]x[0,1]
...and read it in our fragment shader:
background.frag
outColor = vec4(position, 0.0, 1.0);
If you re-build and run the code now, you'll have a boring pattern that always has black in the upper left of the window, green in the lower left, red in the upper right, and yellow in the lower right, no matter how you resize:


Adding Time: Push Constants
Our pattern now nicely resizes with the window, but it would be even cooler if we could animate it. Unfortunately, there isn't a built-in time variable for us to rely on. We'll have to get a time value to our fragment shader from our CPU-side code. To do this, we'll use the GPU's ability to pass a small amount (as few as 128 bytes in some implementations!) of data to the pipeline inside a command buffer as "push constants". In contrast, other methods of exposing data to shaders involve setting up buffers and fancy pointers to those buffers -- more complicated and overkill for our small data!
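As an aside, you can query your device's actual limit (a sketch; the physical_device member name is an assumption about RTG, not something the tutorial has shown):
VkPhysicalDeviceProperties props;
vkGetPhysicalDeviceProperties(rtg.physical_device, &props); //assumption: RTG exposes its VkPhysicalDevice this way
uint32_t max_push = props.limits.maxPushConstantsSize; //guaranteed to be at least 128 by the spec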
Tracking Time
Before we can pass elapsed time to our shader, we need to compute it on the CPU.
Add a float time member to struct Tutorial in Tutorial.hpp:
Tutorial.hpp
//--------------------------------------------------------------------
//Resources that change when time passes or the user interacts:
virtual void update(float dt) override;
virtual void on_input(InputEvent const &) override;
float time = 0.0f;
And add code in Tutorial::update to accumulate the elapsed-time parameter dt into our time accumulator:
Tutorial.cpp
void Tutorial::update(float dt) {
time += dt;
}
Getting Time into the Fragment Shader with Push Constants
To let our fragment shader accept a push constant we'll add this code above main():
background.frag
layout(push_constant) uniform Push {
float time;
};
The layout(push_constant) layout specifier says that the following block will be supplied by the CPU-side code via a push constant.
The uniform keyword indicates that the data is the same across all shader invocations in a given execution of the pipeline, which makes sense because we only push one set of values.
The identifier at the end of the line, Push, is the "block name" and is essentially vestigial. You'll never need to refer to the block by that name inside your shader code (it's actually for reflecting on your shader from your renderer, something which is supported in OpenGL but not supported [without extra libraries] in Vulkan). You can read more about GLSL syntax for blocks here.
After that, we have the structure declaration, which matches C syntax. Now we can refer to the time value simply with the identifier time inside our main function.
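As an aside, if you gave the block an instance name, you'd access its members through that name instead (standard GLSL block syntax; not something we need here):
layout(push_constant) uniform Push {
float time;
} push; //with an instance name, read the value as push.time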
Let's get Verbose: Pushing the Time
Now that we've got the shaders ready to receive a push constant, we need to build and push the constants from the CPU-side code. This will eventually require us to replace refsol::'s pipeline creation function, so get your fingers ready for some extensive typing -- there's a lot of state that goes into pipeline creation.
The Push Structure
We need a CPU-side description of what we're pushing to the shader. Let's add a Push structure to our Tutorial::BackgroundPipeline structure:
Tutorial.hpp
struct BackgroundPipeline {
//no descriptor set layouts
struct Push {
float time;
};
VkPipelineLayout layout = VK_NULL_HANDLE;
//...
} background_pipeline;
The Pipeline Layout
The first part of pipeline creation is creating a pipeline layout (handle type: VkPipelineLayout), which is, in turn, built from a list of descriptor set layouts (handle type: VkDescriptorSetLayout) and push constant ranges (no handle! just a structure: VkPushConstantRange).
If you think of a pipeline as some sort of mysterious device in a box, then the pipeline layout gives the shapes of the input connectors on the box. (If you think of a pipeline as a function, then the pipeline layout is the type of its input arguments.)
Anyway, let's write the code:
Tutorial-BackgroundPipeline.cpp
//...
#include "refsol.hpp"
#include "VK.hpp"
//...
void Tutorial::BackgroundPipeline::create(RTG &rtg, VkRenderPass render_pass, uint32_t subpass) {
VkShaderModule vert_module = rtg.helpers.create_shader_module(vert_code);
VkShaderModule frag_module = rtg.helpers.create_shader_module(frag_code);
//removed: refsol::BackgroundPipeline_create(rtg, render_pass, subpass, vert_module, frag_module, &layout, &handle);
{ //create pipeline layout:
VkPushConstantRange range{
.stageFlags = VK_SHADER_STAGE_FRAGMENT_BIT,
.offset = 0,
.size = sizeof(Push),
};
VkPipelineLayoutCreateInfo create_info{
.sType = VK_STRUCTURE_TYPE_PIPELINE_LAYOUT_CREATE_INFO,
.setLayoutCount = 0,
.pSetLayouts = nullptr,
.pushConstantRangeCount = 1,
.pPushConstantRanges = &range,
};
VK( vkCreatePipelineLayout(rtg.device, &create_info, nullptr, &layout) );
}
//...more code to come...
}
The pPushConstantRanges member of the layout create info structure tells the Vulkan driver which shader(s) are going to use which portion(s) of our push constant structure. In this case, we supply only a single range. We set stageFlags to VK_SHADER_STAGE_FRAGMENT_BIT because we want this push constant to be accessible in the fragment shader. If we wanted it to be usable in the vertex shader as well, we'd bitwise-or VK_SHADER_STAGE_VERTEX_BIT in (and add the corresponding block to our vertex shader).
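That hypothetical both-stage range would look like this (a sketch; not something this tutorial needs):
VkPushConstantRange both_stages{
.stageFlags = VK_SHADER_STAGE_VERTEX_BIT | VK_SHADER_STAGE_FRAGMENT_BIT,
.offset = 0,
.size = sizeof(Push),
};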
Drawing the Rest of the Owl
Now that we've got the pipeline layout with push constants made, we need to actually create the pipeline. We're going to need a bunch of structures for this, so let's do them one-by-one.
Add these in order after the layout creation block in Tutorial::BackgroundPipeline::create
.
The first thing we'll add is a list of the shader modules to run. This is an array because you can include more shader modules to run in other stages:
Tutorial-BackgroundPipeline.cpp
void Tutorial::BackgroundPipeline::create(RTG &rtg, VkRenderPass render_pass, uint32_t subpass) {
//...
{ //create pipeline:
//shader code for vertex and fragment pipeline stages:
std::array< VkPipelineShaderStageCreateInfo, 2 > stages{
VkPipelineShaderStageCreateInfo{
.sType = VK_STRUCTURE_TYPE_PIPELINE_SHADER_STAGE_CREATE_INFO,
.stage = VK_SHADER_STAGE_VERTEX_BIT,
.module = vert_module,
.pName = "main"
},
VkPipelineShaderStageCreateInfo{
.sType = VK_STRUCTURE_TYPE_PIPELINE_SHADER_STAGE_CREATE_INFO,
.stage = VK_SHADER_STAGE_FRAGMENT_BIT,
.module = frag_module,
.pName = "main"
},
};
//more to come...
}
}
Next up, set up the dynamic state structure to indicate that viewport and scissor for this pipeline will be set dynamically (with state commands) instead of remaining fixed:
Tutorial-BackgroundPipeline.cpp
void Tutorial::BackgroundPipeline::create(RTG &rtg, VkRenderPass render_pass, uint32_t subpass) {
//...
{ //create pipeline:
//...
//the viewport and scissor state will be set at runtime for the pipeline:
std::vector< VkDynamicState > dynamic_states{
VK_DYNAMIC_STATE_VIEWPORT,
VK_DYNAMIC_STATE_SCISSOR
};
VkPipelineDynamicStateCreateInfo dynamic_state{
.sType = VK_STRUCTURE_TYPE_PIPELINE_DYNAMIC_STATE_CREATE_INFO,
.dynamicStateCount = uint32_t(dynamic_states.size()),
.pDynamicStates = dynamic_states.data()
};
//more to come...
}
}
Create a vertex input state structure that indicates there are no per-vertex inputs:
Tutorial-BackgroundPipeline.cpp
void Tutorial::BackgroundPipeline::create(RTG &rtg, VkRenderPass render_pass, uint32_t subpass) {
//...
{ //create pipeline:
//...
//this pipeline will take no per-vertex inputs:
VkPipelineVertexInputStateCreateInfo vertex_input_state{
.sType = VK_STRUCTURE_TYPE_PIPELINE_VERTEX_INPUT_STATE_CREATE_INFO,
.vertexBindingDescriptionCount = 0,
.pVertexBindingDescriptions = nullptr,
.vertexAttributeDescriptionCount = 0,
.pVertexAttributeDescriptions = nullptr,
};
//more to come...
}
}
Make an input assembly state structure that tells Vulkan that the pipeline will draw triangles from a list:
Tutorial-BackgroundPipeline.cpp
void Tutorial::BackgroundPipeline::create(RTG &rtg, VkRenderPass render_pass, uint32_t subpass) {
//...
{ //create pipeline:
//...
//this pipeline will draw triangles:
VkPipelineInputAssemblyStateCreateInfo input_assembly_state{
.sType = VK_STRUCTURE_TYPE_PIPELINE_INPUT_ASSEMBLY_STATE_CREATE_INFO,
.topology = VK_PRIMITIVE_TOPOLOGY_TRIANGLE_LIST,
.primitiveRestartEnable = VK_FALSE
};
//more to come...
}
}
Create a viewport state structure that says this pipeline uses only one viewport and one scissor rectangle:
Tutorial-BackgroundPipeline.cpp
void Tutorial::BackgroundPipeline::create(RTG &rtg, VkRenderPass render_pass, uint32_t subpass) {
//...
{ //create pipeline:
//...
//this pipeline will render to one viewport and scissor rectangle:
VkPipelineViewportStateCreateInfo viewport_state{
.sType = VK_STRUCTURE_TYPE_PIPELINE_VIEWPORT_STATE_CREATE_INFO,
.viewportCount = 1,
.scissorCount = 1,
};
//more to come...
}
}
Configure the rasterizer to cull back faces (where front faces are oriented counterclockwise), and to fill polygons:
Tutorial-BackgroundPipeline.cpp
void Tutorial::BackgroundPipeline::create(RTG &rtg, VkRenderPass render_pass, uint32_t subpass) {
//...
{ //create pipeline:
//...
//the rasterizer will cull back faces and fill polygons:
VkPipelineRasterizationStateCreateInfo rasterization_state{
.sType = VK_STRUCTURE_TYPE_PIPELINE_RASTERIZATION_STATE_CREATE_INFO,
.depthClampEnable = VK_FALSE,
.rasterizerDiscardEnable = VK_FALSE,
.polygonMode = VK_POLYGON_MODE_FILL,
.cullMode = VK_CULL_MODE_BACK_BIT,
.frontFace = VK_FRONT_FACE_COUNTER_CLOCKWISE,
.depthBiasEnable = VK_FALSE,
.lineWidth = 1.0f,
};
//more to come...
}
}
Disable multisampling:
Tutorial-BackgroundPipeline.cpp
void Tutorial::BackgroundPipeline::create(RTG &rtg, VkRenderPass render_pass, uint32_t subpass) {
//...
{ //create pipeline:
//...
//multisampling will be disabled (one sample per pixel):
VkPipelineMultisampleStateCreateInfo multisample_state{
.sType = VK_STRUCTURE_TYPE_PIPELINE_MULTISAMPLE_STATE_CREATE_INFO,
.rasterizationSamples = VK_SAMPLE_COUNT_1_BIT,
.sampleShadingEnable = VK_FALSE,
};
//more to come...
}
}
Don't do any depth or stencil tests, either:
Tutorial-BackgroundPipeline.cpp
void Tutorial::BackgroundPipeline::create(RTG &rtg, VkRenderPass render_pass, uint32_t subpass) {
//...
{ //create pipeline:
//...
//depth and stencil tests will be disabled:
VkPipelineDepthStencilStateCreateInfo depth_stencil_state{
.sType = VK_STRUCTURE_TYPE_PIPELINE_DEPTH_STENCIL_STATE_CREATE_INFO,
.depthTestEnable = VK_FALSE,
.depthBoundsTestEnable = VK_FALSE,
.stencilTestEnable = VK_FALSE,
};
//more to come...
}
}
And configure color blending to be disabled for the one color attachment:
Tutorial-BackgroundPipeline.cpp
void Tutorial::BackgroundPipeline::create(RTG &rtg, VkRenderPass render_pass, uint32_t subpass) {
//...
{ //create pipeline:
//...
//there will be one color attachment with blending disabled:
std::array< VkPipelineColorBlendAttachmentState, 1 > attachment_states{
VkPipelineColorBlendAttachmentState{
.blendEnable = VK_FALSE,
.colorWriteMask = VK_COLOR_COMPONENT_R_BIT | VK_COLOR_COMPONENT_G_BIT | VK_COLOR_COMPONENT_B_BIT | VK_COLOR_COMPONENT_A_BIT,
},
};
VkPipelineColorBlendStateCreateInfo color_blend_state{
.sType = VK_STRUCTURE_TYPE_PIPELINE_COLOR_BLEND_STATE_CREATE_INFO,
.logicOpEnable = VK_FALSE,
.attachmentCount = uint32_t(attachment_states.size()),
.pAttachments = attachment_states.data(),
.blendConstants{0.0f, 0.0f, 0.0f, 0.0f},
};
//more to come...
}
}
With all of these parameters specified, reference them all in one large parameter structure and actually create the pipeline:
Tutorial-BackgroundPipeline.cpp
void Tutorial::BackgroundPipeline::create(RTG &rtg, VkRenderPass render_pass, uint32_t subpass) {
//...
{ //create pipeline:
//...
//all of the above structures get bundled together into one very large create_info:
VkGraphicsPipelineCreateInfo create_info{
.sType = VK_STRUCTURE_TYPE_GRAPHICS_PIPELINE_CREATE_INFO,
.stageCount = uint32_t(stages.size()),
.pStages = stages.data(),
.pVertexInputState = &vertex_input_state,
.pInputAssemblyState = &input_assembly_state,
.pViewportState = &viewport_state,
.pRasterizationState = &rasterization_state,
.pMultisampleState = &multisample_state,
.pDepthStencilState = &depth_stencil_state,
.pColorBlendState = &color_blend_state,
.pDynamicState = &dynamic_state,
.layout = layout,
.renderPass = render_pass,
.subpass = subpass,
};
VK( vkCreateGraphicsPipelines(rtg.device, VK_NULL_HANDLE, 1, &create_info, nullptr, &handle) );
}
}
You've conquered the longest stream of "just typing stuff" in the tutorial so far. Do a victory lap by de-allocating the shader modules you made at the top of the function:
Tutorial-BackgroundPipeline.cpp
void Tutorial::BackgroundPipeline::create(RTG &rtg, VkRenderPass render_pass, uint32_t subpass) {
//...
//modules no longer needed now that pipeline is created:
vkDestroyShaderModule(rtg.device, frag_module, nullptr);
vkDestroyShaderModule(rtg.device, vert_module, nullptr);
}
Pushing the Constants and Profiting
Two last things and you will have a nicely animated background.
First, in your render function, actually push the constants:
Tutorial.cpp
{ //draw with the background pipeline:
vkCmdBindPipeline(workspace.command_buffer, VK_PIPELINE_BIND_POINT_GRAPHICS, background_pipeline.handle);
{ //push time:
BackgroundPipeline::Push push{
.time = float(time),
};
vkCmdPushConstants(workspace.command_buffer, background_pipeline.layout, VK_SHADER_STAGE_FRAGMENT_BIT, 0, sizeof(push), &push);
}
vkCmdDraw(workspace.command_buffer, 3, 1, 0, 0);
}
Second, actually do something with time in your fragment shader:
background.frag
void main() {
outColor = vec4(fract(position.x + time), position.y, 0.0, 1.0);
}
Compile and run and you'll have the pattern sweeping left to right at 1Hz:
