Follow-up to #8720, adding:
* Two improvements to FreeType glyph measurements:
- Ensuring that glyphs are measured with the same hinting as they are
rendered, ref
[#8720#issuecomment-3305408157](https://github.com/ghostty-org/ghostty/pull/8720#issuecomment-3305408157);
- For outline glyphs, using the outline bbox instead of the built-in
metrics, like `renderGlyph()`.
* Basic unit tests for face metrics and their estimators, using the
narrowest and widest fonts from the resource directory, Cozette Vector
and Geist Mono.
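For reference, a minimal sketch of the measurement idea against the raw FreeType C API (via `@cImport`; Ghostty's `freetype.zig` wrapper differs, and the function and error names here are illustrative):

```zig
const c = @cImport({
    @cInclude("ft2build.h");
    @cInclude("freetype/freetype.h");
});

/// Measure a glyph using the same load flags that will be used when it is
/// rendered, so the hinted outline we measure matches the raster we draw.
fn measureOutline(face: c.FT_Face, glyph_index: c_uint, load_flags: i32) !c.FT_BBox {
    if (c.FT_Load_Glyph(face, glyph_index, load_flags) != 0)
        return error.LoadGlyphFailed;

    // For outline glyphs, take the control box of the (hinted) outline
    // instead of the face-wide metrics, mirroring what renderGlyph() sees.
    var cbox: c.FT_BBox = undefined;
    c.FT_Outline_Get_CBox(&face.*.glyph.*.outline, &cbox);
    return cbox; // coordinates are in 26.6 fixed point
}
```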
---
I also made one unrelated change to `freetype.zig`, replacing
`@alignCast(@ptrCast(...))` with `@ptrCast(@alignCast(...))` on line
173. Autoformatting has been making this change on every save for weeks,
and reverting the hunk before each commit is getting old, so I hope it's
OK that I use this PR to upstream this decree from the formatter.
We now have absolutely perfect control over the raster position under
FreeType as well. This means that, for example, powerline extended chars
are appropriately clamped to the cell edges at all sizes.
This should be purely an improvement over what we had before, and now it
also matches what we do for CoreText.
This adds functionality for choosing different normalization metrics for
each fallback font. It's not exposed as a config option, but could be in
the future, which would probably go a long way towards addressing
concerns like #7929.
The currently available reference metrics are, in priority order:
`ic_width`, `ex_height`, `cap_height`, `line_height`, `em_size`. The
default value is `ic_width`.
By priority order, I mean that if the chosen metric is not defined in
the fallback font, we move to the next metric in the list; we don't
normalize by an estimated metric from the fallback font (however, we're
happy to use an estimated metric from the primary font; that's how
`ic_width` normalization between CJK and Latin fonts works). This extends
the pattern that was used between `ic_width` and `ex_height` in the
existing hardcoded rule. `line_height` is always defined, so the buck
stops there.
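A minimal sketch of that fallback walk, assuming hypothetical names (`FaceMetrics`, `referenceMetric`) rather than the real ones:

```zig
const FaceMetrics = struct {
    ic_width: ?f64 = null,
    ex_height: ?f64 = null,
    cap_height: ?f64 = null,
    line_height: f64, // always defined
    em_size: f64, // always defined
};

const Metric = enum { ic_width, ex_height, cap_height, line_height, em_size };

/// Walk the priority list starting at the chosen metric and return the
/// first one the fallback face actually defines.
fn referenceMetric(m: FaceMetrics, chosen: Metric) f64 {
    const order = [_]Metric{ .ic_width, .ex_height, .cap_height, .line_height, .em_size };
    var active = false;
    for (order) |candidate| {
        if (candidate == chosen) active = true;
        if (!active) continue;
        switch (candidate) {
            .ic_width => if (m.ic_width) |v| return v,
            .ex_height => if (m.ex_height) |v| return v,
            .cap_height => if (m.cap_height) |v| return v,
            .line_height => return m.line_height,
            .em_size => return m.em_size,
        }
    }
    unreachable; // line_height is always defined, so the walk terminates
}
```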
What motivated me to implement this was the fact that, with the existing
hardcoded rule, the embedded symbols-only Nerd Font was always scaled up
by a factor of 1.2, which turned out to be an important reason why it's
been difficult to make icon scaling work to everyone's satisfaction.
Accordingly, the symbols-only font is the first to take advantage of the
new functionality. If this PR is merged, #7917 is no longer needed. (To
limit the scope of this PR, it only includes the minimal changes to let
icon scaling take advantage of this functionality. I may submit a
follow-up PR with some further icon scaling improvements enabled by
this.)
This makes the `new_window` action properly inherit properties from the
parent surface that initiated the action. Today, that is only the pwd
and font size.
We do this by characterizing the shared bounding boxes in a static copy
of the symbols-only Nerd Font when we're doing the codegen. This makes
our scaling results just as good as a patched font's, since related
glyphs can now be sized and positioned relative to each other.
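To illustrate the idea (hypothetical types, not the actual codegen output):

```zig
const BBox = struct { x_min: f64, y_min: f64, x_max: f64, y_max: f64 };

/// Union of the bounding boxes of a group of related glyphs, computed
/// once at codegen time from a static copy of the font.
fn sharedBox(group: []const BBox) BBox {
    var out = group[0];
    for (group[1..]) |b| {
        out.x_min = @min(out.x_min, b.x_min);
        out.y_min = @min(out.y_min, b.y_min);
        out.x_max = @max(out.x_max, b.x_max);
        out.y_max = @max(out.y_max, b.y_max);
    }
    return out;
}

/// One scale factor for the whole group, derived from the shared box, so
/// every member is sized and positioned relative to the same frame.
fn groupScale(shared: BBox, target_w: f64, target_h: f64) f64 {
    const w = shared.x_max - shared.x_min;
    const h = shared.y_max - shared.y_min;
    return @min(target_w / w, target_h / h);
}
```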
This stops things like folder icons from becoming over-wide. The patcher
typically makes these glyphs exactly 1 cell wide, but since we know how
each glyph will be displayed, we can let it span more than 1 cell when
there's room. This makes our dynamic scaling *better* than a static
patched font :D
Icons were often WAY too big before because they were filling the whole
cell height, which isn't great lol. This commit adds an `icon_height`
metric which is used to constrain glyphs that shouldn't be the size of
the entire cell.
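Roughly like this sketch, with `icon_height` as the only assumed input:

```zig
/// Scale a glyph down uniformly so it tops out at icon_height rather than
/// the full cell height; glyphs already short enough are left alone.
fn constrainIcon(w: f64, h: f64, icon_height: f64) struct { w: f64, h: f64 } {
    if (h <= icon_height) return .{ .w = w, .h = h };
    const scale = icon_height / h;
    return .{ .w = w * scale, .h = icon_height };
}
```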
This mostly applies to powerline glyphs, but is also relevant for heavy
bracket characters, which need to always be 1 cell wide; otherwise they
misalign depending on whether there's a space after them, which looks
silly.
This is in preparation for moving constraint off the GPU to simplify our
shaders; instead, we only need to constrain once at raster time and never
again.
This also significantly reworks the FreeType `renderGlyph` function to
be generally much cleaner and more straightforward.
This commit doesn't actually apply the constraints to anything yet, that
will be in following commits.
This sets the stage for dynamically adjusting the sizes of fallback
fonts based on the primary font's face metrics. It also removes a lot of
unnecessary work when loading fallback fonts, since we only actually use
the metrics based on the primary font.
This is achieved by rendering to an alpha-only context rather than a
normal single-channel context, and adjusting the brightness at which
CoreText thinks it's drawing the glyph, which affects how it applies
font smoothing (which is what `font-thicken` enables).
Allows high-DPI displays to get odd-numbered pixel sizes, for example
13.5pt @ 2px/pt for a 27px font. This implementation performs all the
sizing calculations with f32, rounding to the nearest pixel size when it
comes to rendering. In the future this can be enhanced by adding
fractional scaling to support fractional pixel sizes.
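Something along these lines (a sketch with hypothetical names; the real code paths differ):

```zig
const std = @import("std");

/// All sizing math stays fractional; we round to a whole pixel only at
/// the point where the glyph is actually rasterized.
fn pixelSize(points: f32, px_per_pt: f32) u32 {
    const px = points * px_per_pt;
    return @intFromFloat(@round(px));
}

test "13.5pt at 2px/pt is exactly 27px" {
    try std.testing.expectEqual(@as(u32, 27), pixelSize(13.5, 2.0));
}
```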
Fixes #1618
Font sizes in configuration were always a u8, but the keybinding and
internal state used a u16, which allowed an ever-growing font size. At
a certain point, there is an integer overflow which causes it to wrap
around. This is all silly; 255 should be large enough for anyone[1]
[1]: Ready to be super wrong about this
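The fix amounts to clamped arithmetic along these lines (a sketch, not the actual keybinding code):

```zig
const std = @import("std");

/// Widen before doing the math, then clamp back into u8 range, so
/// repeated increase/decrease actions can never wrap around.
fn adjustFontSize(current: u8, delta: i16) u8 {
    const next = @as(i32, current) + delta;
    return @intCast(std.math.clamp(next, 1, 255));
}
```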
Fixes #895
Every loaded font face calculates metrics for itself. One of the
important metrics is the baseline to "sit" the glyph on top of. Prior to
this commit, each rasterized glyph would sit on its own calculated
baseline. However, this leads to off-center rendering when the font
being rasterized isn't the font that defines the terminal grid.
This commit passes in the font metrics for the font defining the
terminal grid to all font rasterization requests. This can then be used
by non-primary fonts to sit the glyph according to the primary grid.
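For illustration, a sketch of the idea with hypothetical names (not the actual rasterizer code): every face places its glyphs against the primary grid's baseline.

```zig
/// Grid metrics taken from the primary (grid-defining) font.
const GridMetrics = struct {
    cell_height: i32,
    cell_baseline: i32, // distance from the cell bottom up to the baseline
};

/// Y offset from the top of the cell for a glyph with the given bearing,
/// measured against the *primary* grid's baseline rather than the
/// baseline the rasterized face would have computed for itself.
fn glyphTopInCell(grid: GridMetrics, bearing_y: i32) i32 {
    const baseline_y = grid.cell_height - grid.cell_baseline;
    return baseline_y - bearing_y;
}
```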
Fixes #845
Quick background: Emoji codepoints are either default text or default
graphical ("Emoji") presentation. An example of a default text emoji
is ❤. You have to add VS16 to this emoji to get: ❤️. Some fonts are
default graphical and require VS15 to force text.
A font face can only advertise text vs emoji presentation for the entire
font face. Some font faces (e.g. Cozette) include both text glyphs and
emoji glyphs, but since they can only advertise as one, advertise as
"text".
As a result, if a user types an emoji such as 👽, it will fall back to
another font to try to find one that satisfies the "graphical"
presentation requirement. But Cozette supports 👽, it's just advertised
as "text"!
Normally, this behavior is what you want. However, if a user explicitly
requests their font-family to be a font that contains a mix of text and
emoji, they _probably_ want those emoji to be used regardless of default
presentation. This is similar to a rich text editor (like TextEdit on
Mac): if you explicitly select "Cozette" as your font, the alien emoji
shows up using the text-based Cozette glyph.
This commit changes our presentation handling behavior to do the
following:
* If no explicit variation selector (VS15/VS16) is specified,
any matching codepoint in an explicitly loaded font (i.e. via
`font-family`) will be used.
* If an explicit variation selector is specified or our explicitly
loaded fonts don't contain the codepoint, fallback fonts will be
searched but require an exact match on presentation.
* If no fallback is found with an exact match, any font with any
presentation can match the codepoint.
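A minimal sketch of this matching order, using hypothetical types rather than Ghostty's actual font discovery structures:

```zig
const Presentation = enum { text, emoji };

const Face = struct {
    presentation: Presentation, // what the face advertises, face-wide
    codepoints: []const u21,

    fn hasCodepoint(self: Face, cp: u21) bool {
        for (self.codepoints) |c| if (c == cp) return true;
        return false;
    }
};

fn defaultPresentation(cp: u21) Presentation {
    _ = cp; // stub; real code consults Unicode emoji-presentation data
    return .text;
}

fn findFace(
    cp: u21,
    explicit_vs: ?Presentation, // VS15 => .text, VS16 => .emoji, null => none
    explicit_fonts: []const Face, // loaded via `font-family`
    fallbacks: []const Face,
) ?Face {
    // 1. No variation selector: any explicitly loaded face that has the
    //    codepoint wins, whatever presentation it advertises.
    if (explicit_vs == null) {
        for (explicit_fonts) |f| if (f.hasCodepoint(cp)) return f;
    }
    // 2. Otherwise search fallbacks, requiring an exact presentation match
    //    (the explicit selector, or the codepoint's default presentation).
    const want = explicit_vs orelse defaultPresentation(cp);
    for (fallbacks) |f| {
        if (f.hasCodepoint(cp) and f.presentation == want) return f;
    }
    // 3. Last resort: any face with the codepoint, any presentation.
    for (fallbacks) |f| if (f.hasCodepoint(cp)) return f;
    return null;
}
```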
This commit should generally not change the behavior of Emoji or VS15/16
handling for almost all users. The only users impacted are those using
fonts with a mix of emoji and text.
Font metrics realistically should be integral. Cell widths, cell
heights, etc. do not make sense to be floats, since our grid is
integral. There is no such thing as a "half cell" (or any fraction of
one).
The reason we historically had these all as f32 is simplicity mixed
with history. OpenGL APIs and shaders all use f32 for their values, we
originally only supported OpenGL, and all the font rendering used to be
directly in the renderer code (like... a year+ ago).
When we refactored the font metrics calculation into its own system and
also added additional renderers like Metal (which use f64, not f32), we
never updated anything. We just kept metrics as f32 and cast
everywhere.
With CoreText and #177 this finally reared its ugly head. By forgetting
a simple rounding step in the cell metric calculation, our integral
renderers (sprite fonts) were off by 1 pixel compared to the GPU
renderers. Insidious.
Let's represent font metrics with the types that actually make sense: a
cell width/height, etc. is _integral_. When we get to the GPU, we now
cast to floats. We also cast to floats whenever we're doing more precise
math (i.e. mouse offset calculation). In this case, we're only
converting to floats from an integral type, which is going to be much
safer and less prone to uncertain rounding than converting to an int
from a float type.
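In sketch form (hypothetical names):

```zig
const Metrics = struct { cell_width: u32, cell_height: u32 };

/// Floats appear only at the GPU boundary; int -> float conversion at
/// these magnitudes is exact, so the integral (sprite) and float (GPU)
/// paths can never disagree by a pixel.
fn cellSizeF32(m: Metrics) [2]f32 {
    return .{ @floatFromInt(m.cell_width), @floatFromInt(m.cell_height) };
}
```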
Fixes #177