SDL_Surface has been simplified and internal details are no longer in the public structure.
The `format` member of SDL_Surface is now an enumerated pixel format value. You can get the full details of the pixel format by calling `SDL_GetPixelFormatDetails(surface->format)`. You can get the palette associated with the surface by calling SDL_GetSurfacePalette(). You can get the clip rectangle by calling SDL_GetSurfaceClipRect().
SDL_PixelFormat has been renamed SDL_PixelFormatDetails and now just describes the pixel format; it no longer includes a palette for indexed pixel types.
SDL_PixelFormatEnum has been renamed SDL_PixelFormat and is used instead of Uint32 for API functions that refer to pixel format by enumerated value.
SDL_MapRGB(), SDL_MapRGBA(), SDL_GetRGB(), and SDL_GetRGBA() take an optional palette parameter for indexed color lookups.
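For example, mapping a color for a surface under the new API looks something like this (a minimal sketch based on the functions described above):

```c
const SDL_PixelFormatDetails *details = SDL_GetPixelFormatDetails(surface->format);
SDL_Palette *palette = SDL_GetSurfacePalette(surface);  /* NULL for non-indexed formats */
Uint32 pixel = SDL_MapRGBA(details, palette, 0xFF, 0x00, 0x00, 0xFF);
```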
This reverts commit 9f8dffbd2d.
This causes some tests to fail, and wasn't otherwise a necessary change, so
I'm backing it out.
(Looks like some sort of interaction with software renderers and their
surfaces not getting destroyed...?)
After discussion with @ocornut, SDL_RenderGeometryRaw() will take floating point colors and conversion from 8-bit color can happen on the application side. We can always add an 8-bit color fast path in the future if we need it on handheld platforms.
If you need code to do this in your application, you can use the following:
```c
int SDL_RenderGeometryRaw8BitColor(SDL_Renderer *renderer, SDL_Texture *texture, const float *xy, int xy_stride, const SDL_Color *color, int color_stride, const float *uv, int uv_stride, int num_vertices, const void *indices, int num_indices, int size_indices)
{
    int i, retval, isstack;
    const Uint8 *color2 = (const Uint8 *)color;
    SDL_FColor *color3;

    if (num_vertices <= 0) {
        return SDL_InvalidParamError("num_vertices");
    }
    if (!color) {
        return SDL_InvalidParamError("color");
    }

    /* Convert the 8-bit colors into a temporary array of float colors */
    color3 = (SDL_FColor *)SDL_small_alloc(SDL_FColor, num_vertices, &isstack);
    if (!color3) {
        return -1;
    }
    for (i = 0; i < num_vertices; ++i) {
        color3[i].r = color->r / 255.0f;
        color3[i].g = color->g / 255.0f;
        color3[i].b = color->b / 255.0f;
        color3[i].a = color->a / 255.0f;

        /* Advance by the caller's stride to the next input color */
        color2 += color_stride;
        color = (const SDL_Color *)color2;
    }

    retval = SDL_RenderGeometryRaw(renderer, texture, xy, xy_stride, color3, sizeof(*color3), uv, uv_stride, num_vertices, indices, num_indices, size_indices);

    SDL_small_free(color3, isstack);

    return retval;
}
```
Fixes https://github.com/libsdl-org/SDL/issues/9009
These are integer values internally, but the API has been changed to make it easier to mix queries of these values with other render code.
Fixes https://github.com/libsdl-org/SDL/issues/7519
It's possible to destroy an SDL_Renderer without freeing it using
SDL_DestroyRendererWithoutFreeing(), which is used to make it possible
to destroy windows and their renderers in either order. However, if a
renderer has already been destroyed before it is freed (e.g., the window
was destroyed before the renderer), the object is never marked invalid.
This means the SDL_Renderer is reported as leaked, even if
SDL_DestroyRenderer() is called.
SDL_GetWindowSurface() will trigger this, as the window texture is
cleaned up _after_ the window destroys its associated renderer. This
makes it impossible to use SDL_FRAMEBUFFER_ACCELERATION without
triggering a leak warning.
Fix this by unconditionally marking the SDL_Renderer object as invalid
in SDL_DestroyRenderer().
This declares that any `const char *` returned from SDL is owned by SDL, and
promises to be valid _at least_ until the next time the event queue runs, or
SDL_Quit() is called, even if the thing that owns the string gets destroyed
or changed before then.
This is noted in the headers as "the SDL_GetStringRule", so it will both be
greppable, leading to a detailed explanation in docs/README-strings.md, and
wikiheaders will automatically turn it into a link we can point at the
appropriate documentation.
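For example, an application that wants to keep one of these strings beyond the current event-queue run should make its own copy. A sketch, using SDL_GetRendererName() as a stand-in for any SDL function that returns a `const char *`:

```c
const char *name = SDL_GetRendererName(renderer);  /* owned by SDL, temporarily valid */
char *my_copy = name ? SDL_strdup(name) : NULL;    /* safe to keep long-term */
/* ... */
SDL_free(my_copy);
```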
Fixes #9902.
(and several FIXMEs, both known and yet-undocumented.)
The flags parameter has been removed from SDL_CreateRenderer(), and SDL_RENDERER_PRESENTVSYNC has been replaced with SDL_PROP_RENDERER_CREATE_PRESENT_VSYNC_NUMBER during renderer creation and SDL_PROP_RENDERER_VSYNC_NUMBER after renderer creation.
SDL_SetRenderVSync() now takes additional values besides 0 and 1.
The maximum texture size has been removed from SDL_RendererInfo, replaced with SDL_PROP_RENDERER_MAX_TEXTURE_SIZE_NUMBER.
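A sketch of requesting vsync under the new scheme, assuming the current SDL3 properties API (setter names have shifted during SDL3 development, so treat these as illustrative):

```c
SDL_PropertiesID props = SDL_CreateProperties();
SDL_SetPointerProperty(props, SDL_PROP_RENDERER_CREATE_WINDOW_POINTER, window);
SDL_SetNumberProperty(props, SDL_PROP_RENDERER_CREATE_PRESENT_VSYNC_NUMBER, 1);
SDL_Renderer *renderer = SDL_CreateRendererWithProperties(props);
SDL_DestroyProperties(props);

/* ...or change it after creation: */
SDL_SetRenderVSync(renderer, 1);
```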
This allows apps to destroy the window and renderer in either order, but
makes sure that the renderer can properly clean up its resources while OpenGL
contexts and libraries are still loaded, etc.
If the window is destroyed first, the renderer is (mostly) destroyed but its
pointer remains valid. Attempts to use the renderer will return an error,
but it can still be explicitly destroyed, at which time the struct is freed.
If the renderer is destroyed first, everything works as before, and a new
renderer can still be created on the existing window.
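For example, this order now works (a sketch assuming the SDL3 creation signatures after the flags removal):

```c
SDL_Window *window = SDL_CreateWindow("example", 640, 480, 0);
SDL_Renderer *renderer = SDL_CreateRenderer(window, NULL);

SDL_DestroyWindow(window);      /* renderer is torn down, but the pointer stays valid */
/* ...any rendering calls made here return an error... */
SDL_DestroyRenderer(renderer);  /* now the struct itself is freed */
```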
Fixes #9540.
Previously, each backend would allocate and free the renderer struct. Now
the higher level does it, so the backends only manage their private resources.
This removes some boilerplate and avoids some potential accidents.
Previously there were two different paths for renderer creation, and HDR metadata initialization was missing when creating a software renderer for a surface. Now all cases are handled in a single path: whether you create a software renderer by name, create a software renderer for a surface, or fall back to the software renderer, you get the correct initialization.
Fixes https://github.com/libsdl-org/SDL/issues/9221
This pull request adds an implementation of a Vulkan render backend to SDL. I have so far tested this primarily on Windows, but have also smoke tested it on Linux and macOS (MoltenVK). I have not tried it yet on Android, but it should be usable there as well (sans any bugs I missed). This began as a port of the SDL Direct3D12 renderer, which was the closest thing to Vulkan in the SDL codebase. The shaders are more or less identical, with the only differences being in descriptor bindings vs root descriptors. The shaders are built using the HLSL frontend of glslang.
Everything in the code is pure Vulkan 1.0 (no extensions), with the exception of HDR support which requires the Vulkan instance extension `VK_EXT_swapchain_colorspace`. The code could have been simplified considerably if I used dynamic rendering, push descriptors, extended dynamic state, and other modern Vulkan-isms, but I felt it was more important to make the code as vanilla Vulkan as possible so that it would run on any Vulkan implementation.
The main differences with the Direct3D12 renderer are:
* Having to manage renderpasses for performing clears. Some optimizations likely remain for more efficient use of TBDR hardware, where there may still be unnecessary loads/stores, but the backend does attempt to perform clears using renderpasses.
* Constant buffer data couldn't be updated directly in the command buffer, since I didn't want to rely on push descriptors, so CB data is written into a persistently mapped buffer at an increasing offset per swapchain image (see the sketch after this list).
* Many more resources are dependent on the swapchain resizing, because, for example, Vulkan requires the VkFramebuffer to reference the VkImageView of the swapchain, so there is a bit more code around handling that than was necessary in D3D12.
* For NV12/NV21 textures, rather than there being plane data in the texture itself, the UV data is placed in a separate `VkImage`/`VkImageView`.
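A hypothetical sketch of the constant buffer scheme mentioned above; the struct and function names are illustrative, not the actual SDL implementation, and the alignment would come from VkPhysicalDeviceLimits::minUniformBufferOffsetAlignment:

```c
/* Illustrative only: append CB data into a persistently mapped buffer and
 * bind each slice with a dynamic uniform buffer offset. */
typedef struct {
    VkBuffer buffer;
    VkDeviceMemory memory;   /* HOST_VISIBLE | HOST_COHERENT */
    Uint8 *mapped;           /* persistently mapped base pointer */
    VkDeviceSize offset;     /* current append offset, reset when this
                                swapchain image comes around again */
    VkDeviceSize size;
} ConstantBufferArena;

/* Returns the offset to pass to vkCmdBindDescriptorSets() as a dynamic
 * offset, or VK_WHOLE_SIZE if the arena is exhausted and must be grown.
 * alignment must be a power of two (minUniformBufferOffsetAlignment is). */
static VkDeviceSize AppendConstantData(ConstantBufferArena *arena, const void *data,
                                       VkDeviceSize len, VkDeviceSize alignment)
{
    VkDeviceSize offset = (arena->offset + alignment - 1) & ~(alignment - 1);
    if (offset + len > arena->size) {
        return VK_WHOLE_SIZE;
    }
    SDL_memcpy(arena->mapped + offset, data, (size_t)len);
    arena->offset = offset + len;
    return offset;
}
```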
I've verified that `testcolorspace` works with both sRGB and HDR linear, and that `testoverlay` works with the various YUV/NV12/NV21 formats. I've tested `testsprite`. I've checked that window resizing and swapchain out-of-date handling when minimizing are working. I've run through `testautomation` with the render tests. I've also run several of the tests with Vulkan validation and synchronization validation. Surely I will have missed some things, but I think it's in a good state to merge and build out from here.