Harness and Initialization

The only reference solution calls remaining to eliminate are those that are responsible for Vulkan initialization and the main loop that runs our code. This code is boilerplate; it's going to appear (in approximately this form) in any real-time graphics application built on Vulkan.

Swapchain Management

We just dealt with the application side of the swapchain in the last step. So let's work through the code on the harness side.

The relevant data is stored in the RTG structure:

in RTG.hpp
struct RTG {
//...
	VkSurfaceKHR surface = VK_NULL_HANDLE;
	VkSurfaceFormatKHR surface_format{};
	VkPresentModeKHR present_mode{};

	//-------------------------------------------------
	//Stuff used by 'run' to run the main loop (swapchain and workspaces):

	//The swapchain is the list of images that get rendered to and shown on the surface:
	VkSwapchainKHR swapchain = VK_NULL_HANDLE; //in non-headless mode, swapchain images are managed by this object; in headless mode this will be null

	VkExtent2D swapchain_extent = {.width = 0, .height = 0}; //current size of the swapchain
	std::vector< VkImage > swapchain_images; //images in the swapchain
	std::vector< VkImageView > swapchain_image_views; //image views of the images in the swapchain

//...
};

The surface (VkSurfaceKHR surface) is where the swapchain (VkSwapchainKHR swapchain) images will be presented (shown), and our code that recreates the swapchain will be called whenever the current swapchain is no longer compatible with the surface (because, e.g., the surface changed size).

The swapchain itself consists of a list of images, all of the same size (VkExtent2D swapchain_extent); our code will fetch handles to these images into swapchain_images and create views of them in swapchain_image_views.

Making the Swapchain

Taking a look at the signature of the refsol:: code, you might get the feeling that we've got a few steps to get through here; and we do:

in RTG.cpp
void RTG::recreate_swapchain() {
	refsol::RTG_recreate_swapchain(
		configuration.debug,
		device,
		physical_device,
		surface,
		surface_format,
		present_mode,
		graphics_queue_family,
		present_queue_family,
		&swapchain,
		&swapchain_extent,
		&swapchain_images,
		&swapchain_image_views
	);
	//TODO: clean up swapchain if it already exists

	//TODO: determine size, image count, and transform for swapchain

	//TODO: make the swapchain

	//TODO: get the swapchain images

	//TODO: make image views
}

Our plan for this function is to discard the old swapchain (if it exists), create the new swapchain, and then extract handles for the swapchain images and make image views for them.

Destroy the Old Swapchain

Let's start by destroying the old swapchain (if needed):

in RTG.cpp
void RTG::recreate_swapchain() {
	//clean up swapchain if it already exists:
	if (!swapchain_images.empty()) {
		destroy_swapchain();
	}

	//...
}
We will, of course, need to update destroy_swapchain at some point, too.

A note on clean-up: you can handle swapchain changes more gracefully by using the .oldSwapchain member of the swapchain create info structure; for the purposes of this tutorial we're just going to do a slightly less efficient but nonetheless correct thing and destroy the swapchain and rebuild it from scratch.
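
For reference, here's a rough sketch of what the .oldSwapchain path could look like (hypothetical; not the approach this tutorial takes):

VkSwapchainKHR old_swapchain = swapchain;

//fill create_info exactly as in the creation code below, but keep the old handle around:
create_info.oldSwapchain = old_swapchain;

VK( vkCreateSwapchainKHR(device, &create_info, nullptr, &swapchain) );

//the old swapchain is now "retired," but must still be destroyed once nothing references it:
if (old_swapchain != VK_NULL_HANDLE) {
	vkDestroySwapchainKHR(device, old_swapchain, nullptr);
}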

Determine Swapchain Size (etc.)

Speaking of rebuilding the swapchain, we need to know some things about the surface we're going to be building the swapchain for before we can build it. So we'll add code to query the VkSurfaceCapabilitiesKHR of the surface and extract the data we need:

in RTG.cpp
void RTG::recreate_swapchain() {
	//...

	//determine size, image count, and transform for swapchain:
	VkSurfaceCapabilitiesKHR capabilities;
	VK( vkGetPhysicalDeviceSurfaceCapabilitiesKHR(physical_device, surface, &capabilities) );

	swapchain_extent = capabilities.currentExtent;

	uint32_t requested_count = capabilities.minImageCount + 1;
	if (capabilities.maxImageCount != 0) {
		requested_count = std::min(capabilities.maxImageCount, requested_count);
	}

	//...
}

We're determining three things here. The swapchain_extent is straightforward -- we just use the current size of the surface. Similarly, the transform alluded to in the comment is capabilities.currentTransform, which we just pass on to the swapchain creation function.

We set the number of images we'll ask for in the swapchain (requested_count) to one more than the minimum supported, but clamp it to the maximum supported count (which will only be non-zero if there is a defined maximum). Our intent here is that the minimum probably reflects the number of images that will be simultaneously in use by the presentation system plus one (since if all the images in the swapchain are in use by the presentation system, no images will be available to render into). We add one more to allow some amount of parallelism in rendering.

Create the Swapchain

With our parameters computed, we now can create the swapchain itself:

in RTG.cpp
void RTG::recreate_swapchain() {
	//...

	{ //create swapchain
		VkSwapchainCreateInfoKHR create_info{
			.sType = VK_STRUCTURE_TYPE_SWAPCHAIN_CREATE_INFO_KHR,
			.surface = surface,
			.minImageCount = requested_count,
			.imageFormat = surface_format.format,
			.imageColorSpace = surface_format.colorSpace,
			.imageExtent = swapchain_extent,
			.imageArrayLayers = 1,
			.imageUsage = VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT,
			.preTransform = capabilities.currentTransform,
			.compositeAlpha = VK_COMPOSITE_ALPHA_OPAQUE_BIT_KHR,
			.presentMode = present_mode,
			.clipped = VK_TRUE,
			.oldSwapchain = VK_NULL_HANDLE //NOTE: could be more efficient by passing old swapchain handle here instead of destroying it
		};

		std::vector< uint32_t > queue_family_indices{
			graphics_queue_family.value(),
			present_queue_family.value()
		};

		if (queue_family_indices[0] != queue_family_indices[1]) {
			//if images will be presented on a different queue, make sure they are shared:
			create_info.imageSharingMode = VK_SHARING_MODE_CONCURRENT;
			create_info.queueFamilyIndexCount = uint32_t(queue_family_indices.size());
			create_info.pQueueFamilyIndices = queue_family_indices.data();
		} else {
			create_info.imageSharingMode = VK_SHARING_MODE_EXCLUSIVE;
		}

		VK( vkCreateSwapchainKHR(device, &create_info, nullptr, &swapchain) );
	}

	//...
}

The only odd bit of this creation procedure is the code around queue indices. If the queue that images are presented on is different than the queue we're running graphics commands on, we need to set some extra parameters so the swapchain creation function knows to create images that are shared between queues. (We could, alternatively, do a queue family ownership transfer on the image after rendering and before presenting; by making the image shared here we save having to write that code.)

Get Image Handles

Creating the swapchain created a list of images, but we can't do anything with those images without VkImage handles. So our next chunk of code fetches those handles:

in RTG.cpp
void RTG::recreate_swapchain() {
	//...

	{ //get the swapchain images:
		uint32_t count = 0;
		VK( vkGetSwapchainImagesKHR(device, swapchain, &count, nullptr) );
		swapchain_images.resize(count);
		VK( vkGetSwapchainImagesKHR(device, swapchain, &count, swapchain_images.data()) );
	}

	//...
}

This "two-calls" pattern is common in the Vulkan API. Queries that return a variable-length array will return just the length if the data parameter is nullptr; so the first call is getting the array size and the second is actually fetching the contents.

Note, also, that these image handles are all owned by the swapchain; so we don't need to individually destroy them later.
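
This pattern shows up throughout the API. For instance, enumerating instance extension properties has exactly the same shape (a sketch, shown only to illustrate):

uint32_t count = 0;
VK( vkEnumerateInstanceExtensionProperties(nullptr, &count, nullptr) ); //first call: get the count
std::vector< VkExtensionProperties > extensions(count);
VK( vkEnumerateInstanceExtensionProperties(nullptr, &count, extensions.data()) ); //second call: get the data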

Make Image Views

Vulkan code that accesses images generally does so through an image view (handle type: VkImageView). So we might as well create image views for all our swapchain images here.

in RTG.cpp
void RTG::recreate_swapchain() {
	//...

//create views for swapchain images:
	swapchain_image_views.assign(swapchain_images.size(), VK_NULL_HANDLE);
	for (size_t i = 0; i < swapchain_images.size(); ++i) {
		VkImageViewCreateInfo create_info{
			.sType = VK_STRUCTURE_TYPE_IMAGE_VIEW_CREATE_INFO,
			.image = swapchain_images[i],
			.viewType = VK_IMAGE_VIEW_TYPE_2D,
			.format = surface_format.format,
			.components{
				.r = VK_COMPONENT_SWIZZLE_IDENTITY,
				.g = VK_COMPONENT_SWIZZLE_IDENTITY,
				.b = VK_COMPONENT_SWIZZLE_IDENTITY,
				.a = VK_COMPONENT_SWIZZLE_IDENTITY
			},
			.subresourceRange{
				.aspectMask = VK_IMAGE_ASPECT_COLOR_BIT,
				.baseMipLevel = 0,
				.levelCount = 1,
				.baseArrayLayer = 0,
				.layerCount = 1
			},
		};
		VK( vkCreateImageView(device, &create_info, nullptr, &swapchain_image_views[i]) );
	}
}

At this point the code should compile and run, and your renderer should proceed as usual.

Extra: Print Debugging Information

We might as well report a little bit about the swapchain when it is created:

in RTG.cpp
void RTG::recreate_swapchain() {
//...

	if (configuration.debug) {
		std::cout << "Swapchain is now " << swapchain_images.size() << " images of size " << swapchain_extent.width << "x" << swapchain_extent.height << "." << std::endl;
	}
}

If you're curious, this would also be a good place to print out information about the min and max image counts supported by the surface, the current present mode, or the current transformation of the surface.
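
For example, a sketch of such extra output (this assumes it sits at the end of recreate_swapchain, where the capabilities local is still in scope, and that string_VkPresentModeKHR is available from the same enum-to-string helper header that supplies string_VkResult):

if (configuration.debug) {
	std::cout << "Surface supports between " << capabilities.minImageCount << " and " << capabilities.maxImageCount << " images (0 == no maximum)." << std::endl;
	std::cout << "Present mode is " << string_VkPresentModeKHR(present_mode) << "." << std::endl;
}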

Cleaning up the Swapchain

To destroy the swapchain, we run the creation steps in reverse:

in RTG.cpp
void RTG::destroy_swapchain() {
	refsol::RTG_destroy_swapchain(
		device,
		&swapchain,
		&swapchain_images,
		&swapchain_image_views
	);

	VK( vkDeviceWaitIdle(device) ); //wait for any rendering to old swapchain to finish

	//clean up image views referencing the swapchain:
	for (auto &image_view : swapchain_image_views) {
		vkDestroyImageView(device, image_view, nullptr);
		image_view = VK_NULL_HANDLE;
	}
	swapchain_image_views.clear();

	//forget handles to swapchain images (these are destroyed along with the swapchain itself):
	swapchain_images.clear();

	//deallocate the swapchain and (thus) its images:
	if (swapchain != VK_NULL_HANDLE) {
		vkDestroySwapchainKHR(device, swapchain, nullptr);
		swapchain = VK_NULL_HANDLE;
	}
}

There are two subtleties to this code. The first is that we call vkDeviceWaitIdle to ensure that nothing is actively rendering to a swapchain image while we are freeing it. The second is that we don't need to call vkDestroyImage on the swapchain image handles, since these are owned by the swapchain itself. (We do need to destroy the image views, since we created those ourselves.)

After this replacement, the code should again compile and run properly. When testing, make sure to resize the window a few times so this code gets called repeatedly.

Running the Application

"But wait," you exclaim, "I've been running bin/main this whole time."

True enough. What the title of this section references is RTG::run, the "harness" that connects an RTG::Application (like Tutorial) to the windowing system and GPU.

To understand the job of the run function, take a look at the functions it will need to call on an application structure:

in RTG.hpp
	struct Application {
		//handle user input: (called when user interacts)
		virtual void on_input(InputEvent const &) = 0;

		//[re]create resources when swapchain is recreated: (called at start of run() and when window is resized)
		virtual void on_swapchain(RTG &, SwapchainEvent const &) = 0;

		//advance time for dt seconds: (called every frame)
		virtual void update(float dt) = 0;

		//queue commands to render a frame: (called every frame)
		virtual void render(RTG &, RenderParams const &) = 0;
	};
}

So RTG::run will need to capture events (provided by the GLFW window system interface layer); notify the application of swapchain changes; tell the application about elapsed time; and let the application know when it is time to render.

Let's sketch out the framework for these functions:

in RTG.cpp
void RTG::run(Application &application) {
	refsol::RTG_run(*this, application);
	//TODO: initial on_swapchain

	//TODO: setup event handling

	//TODO: setup time handling

	while (!glfwWindowShouldClose(window)) {
		//TODO: event handling

		//TODO: elapsed time handling

		//TODO: render handling (with on_swapchain as needed)
	}

	//TODO: tear down event handling
}

Notice that the core of our run function is a while loop that will run until GLFW lets us know the window should be closed via the glfwWindowShouldClose call.

Time Handling

The simplest thing that our run loop must do is let the application know about elapsed time. We can do this by using the functions provided by the standard library in the std::chrono namespace.

To start with, we establish a time point outside the loop:

in RTG.cpp
void RTG::run(Application &application) {
	//...

	//TODO: setup time handling
	std::chrono::high_resolution_clock::time_point before = std::chrono::high_resolution_clock::now();

	//...
}

Now, every trip through the loop, our code gets the current time, reports the difference to the application, and updates the value of before:

in RTG.cpp
void RTG::run(Application &application) {
	//...
	while (!glfwWindowShouldClose(window)) {
		//...

		{ //elapsed time handling:
			std::chrono::high_resolution_clock::time_point after = std::chrono::high_resolution_clock::now();
			float dt = float(std::chrono::duration< double >(after - before).count());
			before = after;

			dt = std::min(dt, 0.1f); //lag if frame rate dips too low

			application.update(dt);
		}

	//...
}

Notice the line dt = std::min(dt, 0.1f); -- this is here to make sure that frame times don't fall into a black hole if the application's update function starts doing updates slower than real-time. (To see why this might be a problem, consider what would happen without it if application.update(dt) started taking \( 2 dt \) seconds to return -- exponential update-time growth!)
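
To make that concrete: if each update call takes roughly twice its dt argument to return, then the next measured frame time doubles, so without the clamp \( dt_{n+1} \approx 2\,dt_n \), which means \( dt_n \approx 2^n\,dt_0 \) -- frame times grow geometrically. Clamping dt at 0.1 seconds caps the simulation work any single frame can request, so the application falls behind real time gracefully instead of spiraling.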

Note that at this point the code will compile and run, but you won't be able to close the window (at least on some platforms) because we're not letting GLFW check for events.

Event Handling

GLFW already converts events into platform-neutral callback parameters; our code just needs to capture these and translate them into InputEvents for consumption by the application code.

First, a quick look at the InputEvent union:

in InputEvent.hpp
union InputEvent {
	enum Type : uint32_t {
		MouseMotion,
		MouseButtonDown,
		MouseButtonUp,
		MouseWheel,
		KeyDown,
		KeyUp
	} type;
	struct MouseMotion {
		Type type;
		float x, y; //in (possibly fractional) swapchain pixels from upper left of image
		uint8_t state; //all mouse button states, bitfield of (1 << GLFW_MOUSE_BUTTON_*)
	} motion;
	struct MouseButton {
		Type type;
		float x, y; //in (possibly fractional) swapchain pixels from upper left of image
		uint8_t state; //all mouse button states, bitfield of (1 << GLFW_MOUSE_BUTTON_*)
		uint8_t button; //one of the GLFW_MOUSE_BUTTON_* values
		uint8_t mods; //bitfield of modifier keys (GLFW_MOD_* values)
	} button;
	struct MouseWheel {
		Type type;
		float x, y; //scroll offset; +x right, +y up(?); from glfw scroll callback
	} wheel;
	struct KeyInput {
		Type type;
		int key; //GLFW_KEY_* codes
		int mods; //GLFW_MOD_* bits
	} key;
};
After we finish this part of the code, we can finally disregard the warning comment in this header!

The crucial thing to notice is that this is a union, not a struct -- this means that the members are all overlapped in memory. We use the Type type value (which is a member of every branch of the union) to determine which branch of the union is valid to access. A more C++-y way to do the same thing would be to use a std::variant.
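
For comparison, a minimal sketch of what a std::variant-based version might look like (not what this codebase uses):

#include <variant>

struct MouseMotion { float x, y; uint8_t state; };
struct MouseButton { float x, y; uint8_t state, button, mods; };
struct MouseWheel  { float x, y; };
struct KeyInput    { int key, mods; };

using Event = std::variant< MouseMotion, MouseButton, MouseWheel, KeyInput >;

//dispatch with std::get_if (or std::visit) instead of switching on a Type tag:
//  if (auto *motion = std::get_if< MouseMotion >(&event)) { /* handle motion */ }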

Regardless, let's get our event handling set up by making a vector in which to store events and installing event handlers to receive the events from GLFW:

in RTG.cpp
void RTG::run(Application &application) {
	//...

	//setup event handling:
	std::vector< InputEvent > event_queue;
	glfwSetWindowUserPointer(window, &event_queue);

	glfwSetCursorPosCallback(window, cursor_pos_callback);
	glfwSetMouseButtonCallback(window, mouse_button_callback);
	glfwSetScrollCallback(window, scroll_callback);
	glfwSetKeyCallback(window, key_callback);

	//...
}

Notice, also, the use of GLFW's window user pointer to pass a pointer to our event queue into the handlers.

Of course, we should definitely unset these event handlers and the user pointer once the loop exits:

in RTG.cpp
void RTG::run(Application &application) {
	//...

	//tear down event handling:
	glfwSetMouseButtonCallback(window, nullptr);
	glfwSetCursorPosCallback(window, nullptr);
	glfwSetScrollCallback(window, nullptr);
	glfwSetKeyCallback(window, nullptr);

	glfwSetWindowUserPointer(window, nullptr);
}

Then, during the loop itself we can tell GLFW to read events from the windowing system (which will call our callbacks), then drain our event queue into the application's event handling function:

in RTG.cpp
void RTG::run(Application &application) {
	//...
	while (!glfwWindowShouldClose(window)) {
		//event handling:
		glfwPollEvents();

		//deliver all input events to application:
		for (InputEvent const &input : event_queue) {
			application.on_input(input);
		}
		event_queue.clear();
		
		//...
	}
	//...
}

Which reminds me: we should probably write those event handlers. I put these in RTG.cpp just above the run function, and declare them static so the symbols don't get exported where they might collide with functions in other files.

The cursor position callback generates a mouse motion event. It needs to do a bit of extra work to generate the bitmask of currently-pressed buttons.

in RTG.cpp
//above RTG::run:
static void cursor_pos_callback(GLFWwindow *window, double xpos, double ypos) {
	std::vector< InputEvent > *event_queue = reinterpret_cast< std::vector< InputEvent > * >(glfwGetWindowUserPointer(window));
	if (!event_queue) return;

	InputEvent event;
	std::memset(&event, '\0', sizeof(event));

	event.type = InputEvent::MouseMotion;
	event.motion.x = float(xpos);
	event.motion.y = float(ypos);
	event.motion.state = 0;
	for (int b = 0; b < 8 && b < GLFW_MOUSE_BUTTON_LAST; ++b) {
		if (glfwGetMouseButton(window, b) == GLFW_PRESS) {
			event.motion.state |= (1 << b);
		}
	}

	event_queue->emplace_back(event);
}

The memset here is to make sure that any parts of the union we don't write are in a known (and boring) state.

The mouse button callback is similar, with some added code to convert the press and release actions to appropriate Type values:

in RTG.cpp
//above RTG::run:
static void mouse_button_callback(GLFWwindow *window, int button, int action, int mods) {
	std::vector< InputEvent > *event_queue = reinterpret_cast< std::vector< InputEvent > * >(glfwGetWindowUserPointer(window));
	if (!event_queue) return;

	InputEvent event;
	std::memset(&event, '\0', sizeof(event));

	if (action == GLFW_PRESS) {
		event.type = InputEvent::MouseButtonDown;
	} else if (action == GLFW_RELEASE) {
		event.type = InputEvent::MouseButtonUp;
	} else {
		std::cerr << "Strange: unknown mouse button action." << std::endl;
		return;
	}

	double xpos, ypos;
	glfwGetCursorPos(window, &xpos, &ypos);
	event.button.x = float(xpos);
	event.button.y = float(ypos);
	event.button.state = 0;
	for (int b = 0; b < 8 && b < GLFW_MOUSE_BUTTON_LAST; ++b) {
		if (glfwGetMouseButton(window, b) == GLFW_PRESS) {
			event.button.state |= (1 << b);
		}
	}
	event.button.button = uint8_t(button);
	event.button.mods = uint8_t(mods);

	event_queue->emplace_back(event);
}

The scroll callback is probably the most straightforward of the bunch -- it just copies parameters into the event structure:

in RTG.cpp
//above RTG::run:
static void scroll_callback(GLFWwindow *window, double xoffset, double yoffset) {
	std::vector< InputEvent > *event_queue = reinterpret_cast< std::vector< InputEvent > * >(glfwGetWindowUserPointer(window));
	if (!event_queue) return;

	InputEvent event;
	std::memset(&event, '\0', sizeof(event));

	event.type = InputEvent::MouseWheel;
	event.wheel.x = float(xoffset);
	event.wheel.y = float(yoffset);

	event_queue->emplace_back(event);
}

And the keyboard callback does a bit of type-determining but otherwise just copies values:

in RTG.cpp
//above RTG::run:
static void key_callback(GLFWwindow *window, int key, int scancode, int action, int mods) {
	std::vector< InputEvent > *event_queue = reinterpret_cast< std::vector< InputEvent > * >(glfwGetWindowUserPointer(window));
	if (!event_queue) return;

	InputEvent event;
	std::memset(&event, '\0', sizeof(event));

	if (action == GLFW_PRESS) {
		event.type = InputEvent::KeyDown;
	} else if (action == GLFW_RELEASE) {
		event.type = InputEvent::KeyUp;
	} else if (action == GLFW_REPEAT) {
		//ignore repeats
		return;
	} else {
		std::cerr << "Strange: got unknown keyboard action." << std::endl;
	}

	event.key.key = key;
	event.key.mods = mods;

	event_queue->emplace_back(event);
}

I chose to have the code ignore key repeats because they are almost never what you want to use in an interactive application. (Key repeats behave very differently across computers and OSs, and users almost never know how to change the behavior.)

With these event callbacks defined, the code will compile and run (and you'll be able to close the window!); but it still won't show anything.

Render and Swapchain Handling

We are going to tackle the final two functions of the application together, for reasons that will soon become evident.

First, we need to call Application::on_swapchain before the loop starts to give the application the chance to create framebuffers for the current state of the swapchain. We'll wrap this call in a lambda because we'll need it again in a few more places later in this function. (But be sure to also call it here!)

in RTG.cpp
void RTG::run(Application &application) {
	//TODO: initial on_swapchain
	auto on_swapchain = [&,this]() {
		application.on_swapchain(*this, SwapchainEvent{
			.extent = swapchain_extent,
			.images = swapchain_images,
			.image_views = swapchain_image_views,
		});
	};
	on_swapchain();

	//...
}

On to the code in the loop. To support rendering, we need to do four things: get a workspace that isn't being used, acquire an image to render into, let the application use the workspace to render into the image, and (finally) hand that image to Vulkan for presentation. This is complicated by the fact that both acquiring and presenting an image may trigger swapchain recreation.

Let's sketch the code out:

in RTG.cpp
void RTG::run(Application &application) {
	//...
	while (!glfwWindowShouldClose(window)) {
		//...
		//TODO: render handling (with on_swapchain as needed)
		//TODO: acquire a workspace

		//TODO: acquire an image (resize swapchain if needed)

		//TODO: queue rendering work

		//TODO: present image (resize swapchain if needed)

		//...
	}
	//...
}

Acquiring the Workspace

Acquiring a workspace means getting a set of buffers that aren't being used in a current rendering operation. How do we know a workspace isn't being used? Each workspace has an associated workspace_available fence, which is signaled when the rendering work on this workspace is done.

So our workspace acquisition code just pulls out the next workspace in the list and then waits until any rendering work that is using it finishes:

in RTG.cpp
void RTG::run(Application &application) {
	//...
	while (!glfwWindowShouldClose(window)) {
		//...

		uint32_t workspace_index;
		{ //acquire a workspace:
			assert(next_workspace < workspaces.size());
			workspace_index = next_workspace;
			next_workspace = (next_workspace + 1) % workspaces.size();

			//wait until the workspace is not being used:
			VK( vkWaitForFences(device, 1, &workspaces[workspace_index].workspace_available, VK_TRUE, UINT64_MAX) );

			//mark the workspace as in use:
			VK( vkResetFences(device, 1, &workspaces[workspace_index].workspace_available) );
		}

		//...
	}
	//...
}

By the way, this workspace_available fence is exactly the one our render function passes to vkQueueSubmit to signal when the rendering work is done.

Acquiring an Image

Since images are owned by the swapchain (and, generally, the window system interface layer), we acquire them through a call -- vkAcquireNextImageKHR -- to that layer. Unfortunately, the call can fail in two ways we need to handle gracefully, so we can't use the VK macro here.

in RTG.cpp
void RTG::run(Application &application) {
	//...
	while (!glfwWindowShouldClose(window)) {
		//...

		uint32_t image_index = -1U;
		//acquire an image:
retry:
		//Ask the swapchain for the next image index -- note careful return handling:
		if (VkResult result = vkAcquireNextImageKHR(device, swapchain, UINT64_MAX, workspaces[workspace_index].image_available, VK_NULL_HANDLE, &image_index);
		    result == VK_ERROR_OUT_OF_DATE_KHR) {
			//if the swapchain is out-of-date, recreate it and run the loop again:
			std::cerr << "Recreating swapchain because vkAcquireNextImageKHR returned " << string_VkResult(result) << "." << std::endl;
			
			recreate_swapchain();
			on_swapchain();

			goto retry;
		} else if (result == VK_SUBOPTIMAL_KHR) {
			//if the swapchain is suboptimal, render to it and recreate it later:
			std::cerr << "Suboptimal swapchain format -- ignoring for the moment." << std::endl;
		} else if (result != VK_SUCCESS) {
			//other non-success results are genuine errors:
			throw std::runtime_error("Failed to acquire swapchain image (" + std::string(string_VkResult(result)) + ")!");
		}

		//...
	}
	//...
}

Notice that we also pass a VkSemaphore to the vkAcquireNextImageKHR function. It will queue work to signal this semaphore when the swapchain image with the returned index is ready to be rendered to. Returning early like this allows render preparation code to proceed in parallel with the image finishing presentation. (Recall, also, how we added code to wait on this semaphore to our queue submit code in Tutorial::render!)
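
For reference, here's a sketch of how the submit in Tutorial::render ties these primitives together (the names rtg, render_params, and command_buffer are illustrative and may not match your code exactly):

VkPipelineStageFlags wait_stage = VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT;
VkSubmitInfo submit_info{
	.sType = VK_STRUCTURE_TYPE_SUBMIT_INFO,
	.waitSemaphoreCount = 1,
	.pWaitSemaphores = &render_params.image_available, //don't write the image until it is ready
	.pWaitDstStageMask = &wait_stage,
	.commandBufferCount = 1,
	.pCommandBuffers = &command_buffer,
	.signalSemaphoreCount = 1,
	.pSignalSemaphores = &render_params.image_done, //signaled when rendering to the image finishes
};
//workspace_available (a fence) is signaled when all of this submitted work completes:
VK( vkQueueSubmit(rtg.graphics_queue, 1, &submit_info, render_params.workspace_available) );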

Also, yes, I snuck a goto into the code. If you are foundationally opposed, feel free to try another structure. I've tried a few and I think this particular one is no worse.

Calling the Render Function

With workspace and image in hand, our code now assembles a RenderParams parameter structure and hands it to the application:

in RTG.cpp
void RTG::run(Application &application) {
	//...
	while (!glfwWindowShouldClose(window)) {
		//...

		//call render function:
		application.render(*this, RenderParams{
			.workspace_index = workspace_index,
			.image_index = image_index,
			.image_available = workspaces[workspace_index].image_available,
			.image_done = workspaces[workspace_index].image_done,
			.workspace_available = workspaces[workspace_index].workspace_available,
		});

		//...
	}
	//...
}

Presenting the Image

The present function needs a semaphore to know when the rendering work is done. Thankfully, we passed one (image_done) to the rendering function to signal in just this circumstance.

Also note that we -- again -- need to do some careful return value handling.

in RTG.cpp
void RTG::run(Application &application) {
	//...
	while (!glfwWindowShouldClose(window)) {
		//...

		{ //queue the work for presentation:
			VkPresentInfoKHR present_info{
				.sType = VK_STRUCTURE_TYPE_PRESENT_INFO_KHR,
				.waitSemaphoreCount = 1,
				.pWaitSemaphores = &workspaces[workspace_index].image_done,
				.swapchainCount = 1,
				.pSwapchains = &swapchain,
				.pImageIndices = &image_index,
			};

			assert(present_queue);

			//note, again, the careful return handling:
			if (VkResult result = vkQueuePresentKHR(present_queue, &present_info);
			    result == VK_ERROR_OUT_OF_DATE_KHR || result == VK_SUBOPTIMAL_KHR) {
				std::cerr << "Recreating swapchain because vkQueuePresentKHR returned " << string_VkResult(result) << "." << std::endl;
				recreate_swapchain();
				on_swapchain();
			} else if (result != VK_SUCCESS) {
				throw std::runtime_error("failed to queue presentation of image (" + std::string(string_VkResult(result)) + ")!");
			}
		}
	}
	//...
}

And with that we should have the code compiling and drawing things again! Let's celebrate by removing that warning comment from the input event header:

in InputEvent.hpp
// *********************************************************
// *                                                       *
// * WARNING:                                              *
// *                                                       *
// *    Editing this structure will break refsol::RTG_run  *
// *                                                       *
// *********************************************************

Admittedly, it's still a true comment; we just aren't using RTG_run in our code any longer.

Workspace Wrangling

Since we just got done with the run-loop code that uses the synchronization primitives in RTG::PerWorkspace, we might as well write the code to create and destroy those primitives.

A quick reminder of the per-workspace data we need to create and destroy:

in RTG.hpp
//in struct RTG
	//Workspaces hold dynamic state that must be kept separate between frames.
	// RTG stores some synchronization primitives per workspace.
	// (The bulk of per-workspace data will be managed by the Application.)
	struct PerWorkspace {
		VkFence workspace_available = VK_NULL_HANDLE; //workspace is ready for a new render
		VkSemaphore image_available = VK_NULL_HANDLE; //the image is ready to write to
		VkSemaphore image_done = VK_NULL_HANDLE; //the image is done being written to
	};
	std::vector< PerWorkspace > workspaces;

Two VkSemaphores and a VkFence. Not too bad.

Workspace Creation

We create the fence and semaphores with -- as you would expect -- fence and semaphore creation functions.

The only subtlety here is that we pass VK_FENCE_CREATE_SIGNALED_BIT as a flag when creating the fence, since our run-loop code waits on the fence to make sure the workspace is available. If the fences didn't start signaled, the run-loop code would wait forever when trying to acquire the next workspace.

in RTG.cpp
//in RTG::RTG:
	//create workspace resources:
	workspaces.resize(configuration.workspaces);
	for (auto &workspace : workspaces) {
		refsol::RTG_constructor_per_workspace(device, &workspace);
		{ //create workspace fences:
			VkFenceCreateInfo create_info{
				.sType = VK_STRUCTURE_TYPE_FENCE_CREATE_INFO,
				.flags = VK_FENCE_CREATE_SIGNALED_BIT, //start signaled, because all workspaces are available to start
			};

			VK( vkCreateFence(device, &create_info, nullptr, &workspace.workspace_available) );
		}

		{ //create workspace semaphores:
			VkSemaphoreCreateInfo create_info{
				.sType = VK_STRUCTURE_TYPE_SEMAPHORE_CREATE_INFO,
			};

			VK( vkCreateSemaphore(device, &create_info, nullptr, &workspace.image_available) );
			VK( vkCreateSemaphore(device, &create_info, nullptr, &workspace.image_done) );
		}
	}

If you want to spice things up a bit, consider restructuring the loop so you can use the same two create_info structures for all workspaces.
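
Here's one way that restructuring might look (a sketch):

//create workspace resources (create_info structures hoisted out of the loop):
VkFenceCreateInfo fence_info{
	.sType = VK_STRUCTURE_TYPE_FENCE_CREATE_INFO,
	.flags = VK_FENCE_CREATE_SIGNALED_BIT,
};
VkSemaphoreCreateInfo semaphore_info{
	.sType = VK_STRUCTURE_TYPE_SEMAPHORE_CREATE_INFO,
};

workspaces.resize(configuration.workspaces);
for (auto &workspace : workspaces) {
	VK( vkCreateFence(device, &fence_info, nullptr, &workspace.workspace_available) );
	VK( vkCreateSemaphore(device, &semaphore_info, nullptr, &workspace.image_available) );
	VK( vkCreateSemaphore(device, &semaphore_info, nullptr, &workspace.image_done) );
}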

Workspace Destruction

For every Create there must be a Destroy.

in RTG.cpp
//in RTG::~RTG:
	//destroy workspace resources:
	for (auto &workspace : workspaces) {
		refsol::RTG_destructor_per_workspace(device, &workspace);
		if (workspace.workspace_available != VK_NULL_HANDLE) {
			vkDestroyFence(device, workspace.workspace_available, nullptr);
			workspace.workspace_available = VK_NULL_HANDLE;
		}
		if (workspace.image_available != VK_NULL_HANDLE) {
			vkDestroySemaphore(device, workspace.image_available, nullptr);
			workspace.image_available = VK_NULL_HANDLE;
		}
		if (workspace.image_done != VK_NULL_HANDLE) {
			vkDestroySemaphore(device, workspace.image_done, nullptr);
			workspace.image_done = VK_NULL_HANDLE;
		}
	}
	workspaces.clear();

The code should, again, compile and work without complaint. Two more refsol:: functions done.

Initializing Vulkan

Finally, we come to the end of our tutorial. And what better way to end than at the outermost resource scope -- the code that actually sets up and tears down Vulkan.

This code is primarily concerned with five items: the Vulkan instance (VkInstance instance) -- the handle to the library; the physical device (VkPhysicalDevice physical_device) -- the handle to the GPU; the logical device (VkDevice device) -- the handle to our code's view of the GPU; the window (GLFWwindow *window) -- the handle to the window our code is showing output in (managed by GLFW); and the surface (VkSurfaceKHR surface) -- Vulkan's view of the part of the window that shows our graphics.

Tearing down Vulkan

We'll start by writing the code that tears down each of these items:

in RTG.cpp
//in RTG::~RTG:
	//destroy the rest of the resources:
	refsol::RTG_destructor( &device, &surface, &window, &debug_messenger, &instance );

	if (device != VK_NULL_HANDLE) {
		vkDestroyDevice(device, nullptr);
		device = VK_NULL_HANDLE;
	}

	if (surface != VK_NULL_HANDLE) {
		vkDestroySurfaceKHR(instance, surface, nullptr);
		surface = VK_NULL_HANDLE;
	}

	if (window != nullptr) {
		glfwDestroyWindow(window);
		window = nullptr;
	}

	if (debug_messenger != VK_NULL_HANDLE) {
		PFN_vkDestroyDebugUtilsMessengerEXT vkDestroyDebugUtilsMessengerEXT = (PFN_vkDestroyDebugUtilsMessengerEXT)vkGetInstanceProcAddr(instance, "vkDestroyDebugUtilsMessengerEXT");
		if (vkDestroyDebugUtilsMessengerEXT) {
			vkDestroyDebugUtilsMessengerEXT(instance, debug_messenger, nullptr);
			debug_messenger = VK_NULL_HANDLE;
		}
	}

	if (instance != VK_NULL_HANDLE) {
		vkDestroyInstance(instance, nullptr);
		instance = VK_NULL_HANDLE;
	}
}

The debug_messenger holds information about the callback function that we've been using to get information from the validation layer. We'll create this structure along with the Vulkan instance.

Creating a Vulkan Instance

The first thing any Vulkan code needs to do is create a Vulkan instance. This is the handle that you use to access the rest of the library.

When creating an instance, you also tell Vulkan about the extensions and layers you want to use. Extensions are extra functionality that your Vulkan driver adds to support certain use-cases or extra hardware features; the Window System Interface that we've been using to create swapchains and present images consists of several(!) extensions. Layers are extra functionality provided by libraries that sit between your code and the driver's Vulkan implementation; the validation layer that provides so much nice debugging information is one we've already been using.

Let's start by sketching out the creation function. As you can see, we've got a bunch of extensions and layers to add into the respective lists.

in RTG.cpp
//in RTG::RTG:
	{ //create the `instance` (main handle to Vulkan library):
	refsol::RTG_constructor_create_instance(
		configuration.application_info,
		configuration.debug,
		&instance,
		&debug_messenger
	);
		VkInstanceCreateFlags instance_flags = 0;
		std::vector< const char * > instance_extensions;
		std::vector< const char * > instance_layers;

		//TODO: add extensions for MoltenVK portability layer on macOS

		//TODO: add extensions and layers for debugging

		//TODO: add extensions needed by glfw

		//TODO: write debug messenger structure

		VkInstanceCreateInfo create_info{
			.sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO,
			.pNext = nullptr, //TODO: pass debug structure if configured
			.flags = instance_flags,
			.pApplicationInfo = &configuration.application_info,
			.enabledLayerCount = uint32_t(instance_layers.size()),
			.ppEnabledLayerNames = instance_layers.data(),
			.enabledExtensionCount = uint32_t(instance_extensions.size()),
			.ppEnabledExtensionNames = instance_extensions.data()
		};
		VK( vkCreateInstance(&create_info, nullptr, &instance) );

		//TODO: create debug messenger
	}

We request extensions and layers by adding a pointer to a string containing the name of the extension to the lists that get passed to the creation function. For extensions, there are helpful #defines for the names (so you can avoid typos); for layers you will need to type carefully.

If we request an extension or layer that is not available, instance creation will fail.
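
If you'd rather detect a missing layer up front (say, to warn and continue without validation instead of failing), you can check availability first using the same two-call pattern; a sketch:

uint32_t count = 0;
VK( vkEnumerateInstanceLayerProperties(&count, nullptr) );
std::vector< VkLayerProperties > available_layers(count);
VK( vkEnumerateInstanceLayerProperties(&count, available_layers.data()) );

bool have_validation = false;
for (auto const &layer : available_layers) {
	if (std::string(layer.layerName) == "VK_LAYER_KHRONOS_validation") {
		have_validation = true;
		break;
	}
}
//only add the layer to instance_layers if have_validation is true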

First, the portability layer extensions (which are only needed on macOS). These allow our app to work on macOS through the MoltenVK translation layer between Vulkan and Metal. If you aren't on an Apple system, add them anyway since you may want to run your code on a mac at some point.

in RTG.cpp
//in RTG::RTG:
	//add extensions for MoltenVK portability layer on macOS
	#if defined(__APPLE__)
	instance_flags |= VK_INSTANCE_CREATE_ENUMERATE_PORTABILITY_BIT_KHR;

	instance_extensions.emplace_back(VK_KHR_PORTABILITY_ENUMERATION_EXTENSION_NAME);
	instance_extensions.emplace_back(VK_KHR_SURFACE_EXTENSION_NAME);
	instance_extensions.emplace_back(VK_EXT_METAL_SURFACE_EXTENSION_NAME);
	#endif

(Yes, I'm skipping the debug extensions for the moment, we'll handle those below.)

Next, we'll handle the extensions needed for GLFW (the library we're using to get a window). GLFW provides two relevant function calls for this process -- glfwVulkanSupported, which tells us if the GLFW version we're using can actually do things with Vulkan; and glfwGetRequiredInstanceExtensions, which returns an array of extensions that GLFW wants.

in RTG.cpp
//in RTG::RTG:
	{ //add extensions needed by glfw:
		glfwInit();
		if (!glfwVulkanSupported()) {
			throw std::runtime_error("GLFW reports Vulkan is not supported.");
		}

		uint32_t count;
		const char **extensions = glfwGetRequiredInstanceExtensions(&count);
		if (extensions == nullptr) {
			throw std::runtime_error("GLFW failed to return a list of requested instance extensions. Perhaps it was not compiled with Vulkan support.");
		}
		for (uint32_t i = 0; i < count; ++i) {
			instance_extensions.emplace_back(extensions[i]);
		}
	}

The Debug Messenger

For debugging, we're using both the debug utils extension (which allows us to get debug messages delivered to a callback of our choosing), and the validation layer (which will check that our Vulkan usage comports with the specification).

in RTG.cpp
//in RTG::RTG:
	//add extensions and layers for debugging:
	if (configuration.debug) {
		instance_extensions.emplace_back(VK_EXT_DEBUG_UTILS_EXTENSION_NAME);
		instance_layers.emplace_back("VK_LAYER_KHRONOS_validation");
	}

The debug utils extension allows you to have debugging messages delivered to a custom logging function. Let's write one (above RTG::RTG):

in RTG.cpp
//above RTG::RTG:
static VKAPI_ATTR VkBool32 VKAPI_CALL debug_callback(
	VkDebugUtilsMessageSeverityFlagBitsEXT severity,
	VkDebugUtilsMessageTypeFlagsEXT type,
	const VkDebugUtilsMessengerCallbackDataEXT *data,
	void *user_data
) {
	if (severity & VK_DEBUG_UTILS_MESSAGE_SEVERITY_ERROR_BIT_EXT) {
		std::cerr << "\x1b[91m" << "E: ";
	} else if (severity & VK_DEBUG_UTILS_MESSAGE_SEVERITY_WARNING_BIT_EXT) {
		std::cerr << "\x1b[33m" << "w: ";
	} else if (severity & VK_DEBUG_UTILS_MESSAGE_SEVERITY_INFO_BIT_EXT) {
		std::cerr << "\x1b[90m" << "i: ";
	} else { //VERBOSE
		std::cerr << "\x1b[90m" << "v: ";
	}
	std::cerr << data->pMessage << "\x1b[0m" << std::endl;

	return VK_FALSE;
}

Note that the \x1b[... parts of the strings are ANSI escape codes; these escape codes will ensure that compliant terminals print our error logging messages in color. (If your error messages aren't in color, it is probably the case that you are running in Windows on an old build that doesn't support ANSI color in all terminals, or on a codepage that causes the escape sequence to be interpreted as something else. You can fix the codepage thing by including a manifest; see this example build command and manifest.)

To install a debug message handler you call -- unsurprisingly -- a create function. But the create function requires an instance. So how do you get debug messages back from the instance create function?

The debug utils extension provides an interesting hack to work around this problem: you can pass the create info you would use to install the debug handler as part of the pNext chain of your instance create info structure.

So let's define the debug messenger create info structure now:

in RTG.cpp
//in RTG::RTG:
	//TODO: write debug messenger structure
	VkDebugUtilsMessengerCreateInfoEXT debug_messenger_create_info{
		.sType = VK_STRUCTURE_TYPE_DEBUG_UTILS_MESSENGER_CREATE_INFO_EXT,
		.messageSeverity =
			VK_DEBUG_UTILS_MESSAGE_SEVERITY_VERBOSE_BIT_EXT
			| VK_DEBUG_UTILS_MESSAGE_SEVERITY_INFO_BIT_EXT
			| VK_DEBUG_UTILS_MESSAGE_SEVERITY_WARNING_BIT_EXT
			| VK_DEBUG_UTILS_MESSAGE_SEVERITY_ERROR_BIT_EXT,
		.messageType =
			VK_DEBUG_UTILS_MESSAGE_TYPE_GENERAL_BIT_EXT
			| VK_DEBUG_UTILS_MESSAGE_TYPE_VALIDATION_BIT_EXT
			| VK_DEBUG_UTILS_MESSAGE_TYPE_PERFORMANCE_BIT_EXT,
		.pfnUserCallback = debug_callback,
		.pUserData = nullptr
	};

And we can pass it to instance creation if debugging is enabled:

in RTG.cpp
//in RTG::RTG:
	VkInstanceCreateInfo create_info{
		.sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO,
		.pNext = (configuration.debug ? &debug_messenger_create_info : nullptr),
		.flags = instance_flags,
		.pApplicationInfo = &configuration.application_info,
		.enabledLayerCount = uint32_t(instance_layers.size()),
		.ppEnabledLayerNames = instance_layers.data(),
		.enabledExtensionCount = uint32_t(instance_extensions.size()),
		.ppEnabledExtensionNames = instance_extensions.data()
	};

And, once the instance is created, we can call the debug messenger creation function:

in RTG.cpp
//in RTG::RTG:
	//create debug messenger
	if (configuration.debug) {
		PFN_vkCreateDebugUtilsMessengerEXT vkCreateDebugUtilsMessengerEXT = (PFN_vkCreateDebugUtilsMessengerEXT)vkGetInstanceProcAddr(instance, "vkCreateDebugUtilsMessengerEXT");
		if (!vkCreateDebugUtilsMessengerEXT) {
			throw std::runtime_error("Failed to lookup debug utils create fn.");
		}
		VK( vkCreateDebugUtilsMessengerEXT(instance, &debug_messenger_create_info, nullptr, &debug_messenger) );
	}

Notice that, because vkCreateDebugUtilsMessengerEXT is an extension function, we need to get its address dynamically using vkGetInstanceProcAddr before we can call it. This is likely to be a familiar bit of code if you have ever used OpenGL extensions.

At this point, your code should compile and run as before (with exactly the same amount of debug complaints).

Creating a Surface

The process of creating a window and surface are different on each windowing system. GLFW abstracts the process to a pair of calls:

in RTG.cpp
//in RTG::RTG:
	{ //create the `window` and `surface` (where things get drawn):
	refsol::RTG_constructor_create_surface(
		configuration.application_info,
		configuration.debug,
		configuration.surface_extent,
		instance,
		&window,
		&surface
	);
		glfwWindowHint(GLFW_CLIENT_API, GLFW_NO_API);

		window = glfwCreateWindow(configuration.surface_extent.width, configuration.surface_extent.height, configuration.application_info.pApplicationName, nullptr, nullptr);

		if (!window) {
			throw std::runtime_error("GLFW failed to create a window.");
		}

		VK( glfwCreateWindowSurface(instance, window, nullptr, &surface) );
	}

Selecting a Physical Device

To select a physical device, our code will look through all of the available devices and pick the best one (more on that in a moment):

in RTG.cpp
//in RTG::RTG:
	{ //select the `physical_device` -- the gpu that will be used to draw:
	refsol::RTG_constructor_select_physical_device(
		configuration.debug,
		configuration.physical_device_name,
		instance,
		&physical_device
	);
		std::vector< std::string > physical_device_names; //for later error message
		{ //pick a physical device
			//TODO
		}

		if (physical_device == VK_NULL_HANDLE) {
			//TODO: report error
		}

		{ //report device name:
			VkPhysicalDeviceProperties properties;
			vkGetPhysicalDeviceProperties(physical_device, &properties);
			std::cout << "Selected physical device '" << properties.deviceName << "'." << std::endl;
		}
	}

To pick the best physical device, our code will either (a) look for a name matching the configuration, if one was specified or (b) look for a device with a high "score" for a simple scoring function:

in RTG.cpp
//in RTG::RTG:
{ //pick a physical device
	//TODO
	uint32_t count = 0;
	VK( vkEnumeratePhysicalDevices(instance, &count, nullptr) );
	std::vector< VkPhysicalDevice > physical_devices(count);
	VK( vkEnumeratePhysicalDevices(instance, &count, physical_devices.data()) );

	uint32_t best_score = 0;

	for (auto const &pd : physical_devices) {
		VkPhysicalDeviceProperties properties;
		vkGetPhysicalDeviceProperties(pd, &properties);

		VkPhysicalDeviceFeatures features;
		vkGetPhysicalDeviceFeatures(pd, &features);

		physical_device_names.emplace_back(properties.deviceName);

		if (!configuration.physical_device_name.empty()) {
			if (configuration.physical_device_name == properties.deviceName) {
				if (physical_device) {
					std::cerr << "WARNING: have two physical devices with the name '" << properties.deviceName << "'; using the first to be enumerated." << std::endl;
				} else {
					physical_device = pd;
				}
			}
		} else {
			uint32_t score = 1;
			if (properties.deviceType == VK_PHYSICAL_DEVICE_TYPE_DISCRETE_GPU) {
				score += 0x8000;
			}

			if (score > best_score) {
				best_score = score;
				physical_device = pd;
			}
		}
	}
}

As you can see, the scoring function just looks for any discrete GPU. You might -- at some point -- want to refine this to look for specific features of interest.
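
For example, a refined scoring function could look something like this (purely illustrative):

uint32_t score = 1;
if (properties.deviceType == VK_PHYSICAL_DEVICE_TYPE_DISCRETE_GPU) {
	score += 0x8000;
} else if (properties.deviceType == VK_PHYSICAL_DEVICE_TYPE_INTEGRATED_GPU) {
	score += 0x4000;
}
//reward features you plan to rely on:
if (features.samplerAnisotropy) score += 0x100;
if (features.wideLines) score += 0x10;
//properties.limits (e.g., maxImageDimension2D) is another reasonable thing to inspect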

Finally, we add code to print a nice error message if no GPUs are found:

in RTG.cpp
//in RTG::RTG:
if (physical_device == VK_NULL_HANDLE) {
	//TODO: report error
	std::cerr << "Physical devices:\n";
	for (std::string const &name : physical_device_names) {
		std::cerr << "    " << name << "\n";
	}
	std::cerr.flush();

	if (!configuration.physical_device_name.empty()) {
		throw std::runtime_error("No physical device with name '" + configuration.physical_device_name + "'.");
	} else {
		throw std::runtime_error("No suitable GPU found.");
	}
}

Selecting a Surface Format and Presentation Mode

Now that we have a physical device and a surface, our code can determine which surface formats (i.e., storage format and color space) and present modes are supported by this combination.

in RTG.cpp
//in RTG::RTG:
	{ //select the `surface_format` and `present_mode` which control how colors are represented on the surface and how new images are supplied to the surface:
	refsol::RTG_constructor_select_format_and_mode(
		configuration.debug,
		configuration.surface_formats,
		configuration.present_modes,
		physical_device,
		surface,
		&surface_format,
		&present_mode
	);
		std::vector< VkSurfaceFormatKHR > formats;
		std::vector< VkPresentModeKHR > present_modes;
		
		{
			uint32_t count = 0;
			VK( vkGetPhysicalDeviceSurfaceFormatsKHR(physical_device, surface, &count, nullptr) );
			formats.resize(count);
			VK( vkGetPhysicalDeviceSurfaceFormatsKHR(physical_device, surface, &count, formats.data()) );
		}

		{
			uint32_t count = 0;
			VK( vkGetPhysicalDeviceSurfacePresentModesKHR(physical_device, surface, &count, nullptr) );
			present_modes.resize(count);
			VK( vkGetPhysicalDeviceSurfacePresentModesKHR(physical_device, surface, &count, present_modes.data()) );
		}

		//find first available surface format matching config:
		surface_format = [&](){
			for (auto const &config_format : configuration.surface_formats) {
				for (auto const &format : formats) {
					if (config_format.format == format.format && config_format.colorSpace == format.colorSpace) {
						return format;
					}
				}
			}
			throw std::runtime_error("No format matching requested format(s) found.");
		}();

		//find first available present mode matching config:
		present_mode = [&](){
			for (auto const &config_mode : configuration.present_modes) {
				for (auto const &mode : present_modes) {
					if (config_mode == mode) {
						return mode;
					}
				}
			}
			throw std::runtime_error("No present mode matching requested mode(s) found.");
		}();
	}

This is also a good place to write some debug code to list all the supported surface formats and present modes.
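
A sketch of what that debug output could look like (assuming the same enum-to-string helpers -- string_VkFormat and friends -- that supply string_VkResult elsewhere in this file):

if (configuration.debug) {
	std::cout << "Available surface formats:\n";
	for (auto const &format : formats) {
		std::cout << "    " << string_VkFormat(format.format) << " / " << string_VkColorSpaceKHR(format.colorSpace) << "\n";
	}
	std::cout << "Available present modes:\n";
	for (auto const &mode : present_modes) {
		std::cout << "    " << string_VkPresentModeKHR(mode) << "\n";
	}
	std::cout.flush();
}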

Creating the Logical Device

Finally, we can create the logical device -- the root of all our application-specific Vulkan resources. Part of creating the logical device is also getting handles to the queues we will be submitting work on.

So let's sketch that process out:

in RTG.cpp
//in RTG::RTG:
	{ //create the `device` (logical interface to the GPU) and the `queue`s to which we can submit commands:
	refsol::RTG_constructor_create_device(
		configuration.debug,
		physical_device,
		surface,
		&device,
		&graphics_queue_family,
		&graphics_queue,
		&present_queue_family,
		&present_queue
	);
		{ //look up queue indices:
			//TODO
		}

		//select device extensions:
		std::vector< const char * > device_extensions;
		//TODO

		{ //create the logical device:
			//TODO
		}
	}

Queues come from various families. We need to find a queue family (index) that supports graphics, and one that we can use to present on the supplied surface. So we write code to list all of the queue families and check for the desired properties on each:

in RTG.cpp
//in RTG::RTG:
{ //look up queue indices:
	//TODO
	uint32_t count = 0;
	vkGetPhysicalDeviceQueueFamilyProperties(physical_device, &count, nullptr);
	std::vector< VkQueueFamilyProperties > queue_families(count);
	vkGetPhysicalDeviceQueueFamilyProperties(physical_device, &count, queue_families.data());

	for (auto const &queue_family : queue_families) {
		uint32_t i = uint32_t(&queue_family - &queue_families[0]);

		//if it does graphics, set the graphics queue family:
		if (queue_family.queueFlags & VK_QUEUE_GRAPHICS_BIT) {
			if (!graphics_queue_family) graphics_queue_family = i;
		}

		//if it has present support, set the present queue family:
		VkBool32 present_support = VK_FALSE;
		VK( vkGetPhysicalDeviceSurfaceSupportKHR(physical_device, i, surface, &present_support) );
		if (present_support == VK_TRUE) {
			if (!present_queue_family) present_queue_family = i;
		}
	}

	if (!graphics_queue_family) {
		throw std::runtime_error("No queue with graphics support.");
	}

	if (!present_queue_family) {
		throw std::runtime_error("No queue with present support.");
	}
}

Note that the queue family variables have type std::optional< uint32_t > which is why we can both check them as bools (testing if they contain a value) and set them to indices.
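
If std::optional is new to you, here is a minimal illustration of the behavior being relied on (requires the <optional> header; the variable name is made up):

std::optional< uint32_t > maybe_family; //starts empty: converts to false in a bool context
if (!maybe_family) { /* true -- nothing stored yet */ }
maybe_family = 2;                        //now holds a value; the bool test becomes true
uint32_t index = maybe_family.value();   //extract the stored value (throws if empty)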

Device and instance extensions are separate. We only need one device extension: the one that lets us create swapchains. (Well, and one for portability on macOS.)

in RTG.cpp
//in RTG::RTG:
//select device extensions:
std::vector< const char * > device_extensions;
//TODO
#if defined(__APPLE__)
device_extensions.emplace_back(VK_KHR_PORTABILITY_SUBSET_EXTENSION_NAME);
#endif
//Add the swapchain extension:
device_extensions.emplace_back(VK_KHR_SWAPCHAIN_EXTENSION_NAME);

There are also parameters in device creation for device layers, but these are deprecated because it wasn't clear what they were useful for.

Now that we know the indices of the queues we want, and the device extensions to enable, we can construct the appropriate create info structure and actually make the device:

in RTG.cpp
//in RTG::RTG:
{ //create the logical device:
	//TODO
	std::vector< VkDeviceQueueCreateInfo > queue_create_infos;
	std::set< uint32_t > unique_queue_families{
		graphics_queue_family.value(),
		present_queue_family.value()
	};

	float queue_priorities[1] = { 1.0f };
	for (uint32_t queue_family : unique_queue_families) {
		queue_create_infos.emplace_back(VkDeviceQueueCreateInfo{
			.sType = VK_STRUCTURE_TYPE_DEVICE_QUEUE_CREATE_INFO,
			.queueFamilyIndex = queue_family,
			.queueCount = 1,
			.pQueuePriorities = queue_priorities,
		});
	}

	VkDeviceCreateInfo create_info{
		.sType = VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO,
		.queueCreateInfoCount = uint32_t(queue_create_infos.size()),
		.pQueueCreateInfos = queue_create_infos.data(),

		//device layers are deprecated; spec suggests passing instance_layers or nullptr:
		.enabledLayerCount = 0,
		.ppEnabledLayerNames = nullptr,

		.enabledExtensionCount = static_cast< uint32_t >(device_extensions.size()),
		.ppEnabledExtensionNames = device_extensions.data(),

		//pass a pointer to a VkPhysicalDeviceFeatures to request specific features: (e.g., thick lines)
		.pEnabledFeatures = nullptr,
	};

	VK( vkCreateDevice(physical_device, &create_info, nullptr, &device) );

	vkGetDeviceQueue(device, graphics_queue_family.value(), 0, &graphics_queue);
	vkGetDeviceQueue(device, present_queue_family.value(), 0, &present_queue);
}

Notice that we also take the time to get handles to the queue(s) after creating the device. (If the queue indices are the same, this is just getting two handles to the same queue, but that's okay.)

Congratulations

You've now removed all of the refsol:: code from RTG.cpp and the build as a whole. Nicely done.

Celebrate by removing the refsol.hpp include:

in RTG.cpp
#include "RTG.hpp"

#include "VK.hpp"
#include "refsol.hpp"

#include <vulkan/vulkan_core.h>
//...

And then go even further by patching the refsol.o out of the build altogether:

in Maekfile.js
//...

const prebuilt_objs = [ ];

//use the prebuilt refsol.o unless refsol.cpp exists:
if (require('fs').existsSync('refsol.cpp')) {
	const refsol_shaders = [
		maek.GLSLC('refsol-background.vert'),
		maek.GLSLC('refsol-background.frag'),
	];
	main_objs.push( maek.CPP('refsol.cpp', `pre/${maek.OS}-${process.arch}/refsol`, { depends:refsol_shaders } ) );
} else {
	prebuilt_objs.push(`pre/${maek.OS}-${process.arch}/refsol${maek.DEFAULT_OPTIONS.objSuffix}`);
}

const main_exe = maek.LINK([...main_objs], 'bin/main');

//...

The code should continue to build and run without problems.

Afterword

Wow! You made it! You now know everything you need to know to make your own pipelines, shaders, render passes, and more. Nice job.

If you have any comments on this tutorial, or even if you just want to show off some of the cool creative exercises you've embarked upon, feel free to file an issue against the nakluV repository.