This change was made to SDL_DisplayMode for consistency with SDL_Surface, and gives the driver data a concrete type so we don't have to do casts in SDL code.
I considered switching to an ID and hashing the driver data, etc., but all of that involved a lot of internal code churn, and this solution gives us flexibility in how we handle this in the future.
After consideration, I made this renaming global across the project, for consistency.
Fixes https://github.com/libsdl-org/SDL/issues/10198
SDL_Surface has been simplified and internal details are no longer in the public structure.
The `format` member of SDL_Surface is now an enumerated pixel format value. You can get the full details of the pixel format by calling `SDL_GetPixelFormatDetails(surface->format)`, the palette associated with the surface by calling `SDL_GetSurfacePalette()`, and the clip rectangle by calling `SDL_GetSurfaceClipRect()`.
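For illustration, a minimal sketch of how the new accessors fit together, assuming the released SDL3 headers (the `inspect_surface` helper is hypothetical; `bytes_per_pixel` follows the public SDL_PixelFormatDetails struct):

```c
#include <SDL3/SDL.h>

void inspect_surface(SDL_Surface *surface)
{
    const SDL_PixelFormatDetails *details = SDL_GetPixelFormatDetails(surface->format);
    SDL_Palette *palette = SDL_GetSurfacePalette(surface); /* NULL for non-indexed formats */
    SDL_Rect clip;
    SDL_GetSurfaceClipRect(surface, &clip);

    SDL_Log("%d bytes per pixel, palette: %s, clip: %dx%d",
            details->bytes_per_pixel, palette ? "yes" : "no", clip.w, clip.h);
}
```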
SDL_PixelFormat has been renamed SDL_PixelFormatDetails and just describes the pixel format; it no longer includes a palette for indexed pixel types.
SDL_PixelFormatEnum has been renamed SDL_PixelFormat and is used instead of Uint32 for API functions that refer to pixel format by enumerated value.
SDL_MapRGB(), SDL_MapRGBA(), SDL_GetRGB(), and SDL_GetRGBA() take an optional palette parameter for indexed color lookups.
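A sketch of the new calling convention, assuming the released SDL3 signatures (the `map_red` helper is hypothetical):

```c
#include <SDL3/SDL.h>

/* Map pure red to a pixel value for this surface's format, passing
   the palette so indexed formats resolve to the right index. */
Uint32 map_red(SDL_Surface *surface)
{
    const SDL_PixelFormatDetails *details = SDL_GetPixelFormatDetails(surface->format);
    SDL_Palette *palette = SDL_GetSurfacePalette(surface); /* NULL unless indexed */
    return SDL_MapRGB(details, palette, 255, 0, 0);
}
```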
The flags parameter has been removed from SDL_CreateRenderer(), and SDL_RENDERER_PRESENTVSYNC has been replaced with SDL_PROP_RENDERER_CREATE_PRESENT_VSYNC_NUMBER during renderer creation and SDL_PROP_RENDERER_VSYNC_NUMBER after renderer creation.
SDL_SetRenderVSync() now takes additional values besides 0 and 1.
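A hedged sketch of creating a renderer with vsync via properties, assuming the released SDL3 property API (the `create_vsync_renderer` helper is hypothetical):

```c
#include <SDL3/SDL.h>

SDL_Renderer *create_vsync_renderer(SDL_Window *window)
{
    SDL_PropertiesID props = SDL_CreateProperties();
    SDL_SetPointerProperty(props, SDL_PROP_RENDERER_CREATE_WINDOW_POINTER, window);
    SDL_SetNumberProperty(props, SDL_PROP_RENDERER_CREATE_PRESENT_VSYNC_NUMBER, 1);
    SDL_Renderer *renderer = SDL_CreateRendererWithProperties(props);
    SDL_DestroyProperties(props);

    /* vsync can also be changed after creation; values besides 0 and 1
       (e.g. 2 to present every other refresh) are now accepted. */
    SDL_SetRenderVSync(renderer, 1);
    return renderer;
}
```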
The maximum texture size has been removed from SDL_RendererInfo, replaced with SDL_PROP_RENDERER_MAX_TEXTURE_SIZE_NUMBER.
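For example, the limit can now be read from the renderer's properties (a sketch, assuming the released SDL3 property getters; the helper name is hypothetical):

```c
#include <SDL3/SDL.h>

Sint64 get_max_texture_size(SDL_Renderer *renderer)
{
    return SDL_GetNumberProperty(SDL_GetRendererProperties(renderer),
                                 SDL_PROP_RENDERER_MAX_TEXTURE_SIZE_NUMBER, 0);
}
```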
Previously, each backend would allocate and free the renderer struct. Now the higher level does it, so the backends only manage their private resources.
This removes some boilerplate and avoids some potential accidents.
This better reflects how HDR content is actually used, e.g. most content is in the SDR range, with specular highlights and bright details beyond the SDR range, in the HDR headroom.
This more closely matches how HDR is handled on Apple platforms, where it is exposed as EDR (Extended Dynamic Range).
This also greatly simplifies application code, which no longer has to think about color scaling: SDR content is rendered at the appropriate brightness automatically, and HDR content is scaled to the correct range for the display's HDR headroom.
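A hedged sketch of drawing at the display's brightest white, assuming the SDL3 window HDR properties and float draw colors (the property and function names follow the released headers; the helper is hypothetical):

```c
#include <SDL3/SDL.h>

/* With an HDR colorspace, 1.0 is SDR white; values up to the
   headroom reach into the HDR range. */
void draw_brightest_white(SDL_Window *window, SDL_Renderer *renderer)
{
    float headroom = SDL_GetFloatProperty(SDL_GetWindowProperties(window),
                                          SDL_PROP_WINDOW_HDR_HEADROOM_FLOAT, 1.0f);
    SDL_SetRenderDrawColorFloat(renderer, headroom, headroom, headroom, 1.0f);
}
```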
On some systems creating the entire set of available pipeline states is very time consuming. We'll only use a few of them in any given program, so we'll just create them on demand.
Fixes https://github.com/libsdl-org/SDL/issues/7634
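The idea behind on-demand creation, as a hypothetical sketch (none of these names are the actual SDL internals; bounds checking is omitted for brevity):

```c
#include <string.h>

/* Stand-in types for the real backend structures. */
typedef struct { int shader; int blend_mode; } PipelineKey;
typedef struct { PipelineKey key; /* ...expensive driver objects... */ } PipelineState;

#define MAX_PIPELINES 64
static PipelineKey   cached_keys[MAX_PIPELINES];
static PipelineState cached_states[MAX_PIPELINES];
static int           cached_count = 0;

PipelineState *GetPipelineState(PipelineKey key)
{
    for (int i = 0; i < cached_count; ++i) {
        if (memcmp(&cached_keys[i], &key, sizeof(key)) == 0) {
            return &cached_states[i]; /* cache hit: already created */
        }
    }
    /* Cache miss: build this state on first use instead of
       precreating every combination at startup. */
    cached_keys[cached_count] = key;
    cached_states[cached_count].key = key;
    return &cached_states[cached_count++];
}
```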
The renderer will always use the sRGB colorspace for drawing, and will default to the sRGB output colorspace. If you want blending in linear space and HDR support, you can select the scRGB output colorspace, which is supported by the direct3d11 and direct3d12 renderers.
You can't do blending directly in PQ space, which means you have to create a scene render target in linear space and use shaders to convert PQ texture data to linear, etc. All of this is out of scope for the SDL 2D renderer at the moment.
This allows color operations to happen in linear space between sRGB input and sRGB output. This is currently supported on the direct3d11, direct3d12 and opengl renderers.
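A sketch of opting into the linear output colorspace at creation time, assuming the released SDL3 colorspace properties (the helper name is hypothetical):

```c
#include <SDL3/SDL.h>

SDL_Renderer *create_linear_renderer(SDL_Window *window)
{
    SDL_PropertiesID props = SDL_CreateProperties();
    SDL_SetPointerProperty(props, SDL_PROP_RENDERER_CREATE_WINDOW_POINTER, window);
    SDL_SetNumberProperty(props, SDL_PROP_RENDERER_CREATE_OUTPUT_COLORSPACE_NUMBER,
                          SDL_COLORSPACE_SRGB_LINEAR); /* scRGB, blends in linear space */
    SDL_Renderer *renderer = SDL_CreateRendererWithProperties(props);
    SDL_DestroyProperties(props);
    return renderer;
}
```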
This is a good resource on blending in linear space vs sRGB space:
https://blog.johnnovak.net/2016/09/21/what-every-coder-should-know-about-gamma/
Also added a testcolorspace program to verify the colorspace changes.