This has a few advantages:
* We now statically assert that all fields used by a particular block type are
correctly initialized (i.e. you can't assign the function name but forget to
assign its arguments),
* Conversely, we can match directly on `BlockData` and be guaranteed that the
fields we want to access are initialized and present,
* We reduce the number of assertions, effectively "unwrapping" only once based
on the block type instead of each time we try to access a conditional field,
* We reduce the size of the `Block` struct by coalescing fields that cannot
co-exist, bringing it down from 104 bytes to 88 bytes.
It would be nice to make all of `Block` itself an enum, but it currently
requires `Copy` and we take advantage of that to copy it around everywhere.
Putting these fields directly in `Block` would mean a lot more memory traffic
just checking block types.
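A minimal sketch of the shape, with assumed variant and field names (fish's
real definitions differ, and `String` stands in for fish's wide-string type):

```rust
// Assumed names for illustration; per-block-type fields live in one
// enum, so fields that cannot co-exist share storage.
enum BlockData {
    // Constructing a variant requires every one of its fields, so you
    // can't set the function name but forget its arguments.
    FunctionCall { name: String, args: Vec<String> },
    Source { file: String },
}

// Matching on a variant guarantees its fields are initialized and
// present; no per-field unwrapping is needed.
fn describe(data: &BlockData) -> String {
    match data {
        BlockData::FunctionCall { name, args } => format!("{name}({})", args.join(", ")),
        BlockData::Source { file } => format!("source {file}"),
    }
}
```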
Mostly replacing `std::<type>::MAX` with `<type>::MAX`.
Surprising here is replacing `.expect(format!(...))` with
`.unwrap_or_else(|_| panic!(...))`.
The lint explains that this is because the `format!` argument would always be
evaluated, even on the success path.
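A small illustration of the difference (the function and message here are
hypothetical): the argument to `expect()` is built eagerly, so the `format!`
runs even on the `Ok` path, while the closure passed to `unwrap_or_else()`
only runs on `Err`:

```rust
fn read_config(result: Result<u32, std::io::Error>, path: &str) -> u32 {
    // Before: the message is built eagerly, so format! allocates even
    // when the Result is Ok:
    //     result.expect(&format!("failed to read {path}"))
    // After: the closure only runs (and only formats) on the Err path.
    result.unwrap_or_else(|_| panic!("failed to read {path}"))
}
```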
Hex float parsing may come about through wcstod; for example,
`printf "%f" '0x8p2'`
should output 32.0.
Currently we use a not-great fork of hexponent. Hexponent has been dormant for
years and has some issues: it doesn't round properly, allocates unnecessarily,
doesn't handle denormals, and is more complicated than necessary.
Just rewrite hex float parsing, fixing those problems and getting us off this
weird fork.
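For illustration, a naive sketch of the format (a hypothetical helper, not the
rewrite itself); note that accumulating in `f64` like this is precisely the
kind of thing that doesn't round correctly in all cases:

```rust
// Hypothetical helper, not the new implementation: "0x8p2" means
// 8 * 2^2 == 32.0. Accumulating in f64 like this does NOT round
// correctly in general; a proper implementation computes the mantissa
// exactly and rounds once, at the end.
fn parse_hex_float(s: &str) -> Option<f64> {
    let s = s.strip_prefix("0x").or_else(|| s.strip_prefix("0X"))?;
    let (mantissa, exponent) = match s.split_once(|c| c == 'p' || c == 'P') {
        Some((m, e)) => (m, e.parse::<i32>().ok()?),
        None => (s, 0),
    };
    let (int_part, frac_part) = mantissa.split_once('.').unwrap_or((mantissa, ""));
    let mut value = 0.0_f64;
    for c in int_part.chars() {
        value = value * 16.0 + f64::from(c.to_digit(16)?);
    }
    let mut scale = 1.0 / 16.0;
    for c in frac_part.chars() {
        value += f64::from(c.to_digit(16)?) * scale;
        scale /= 16.0;
    }
    // parse_hex_float("0x8p2") == Some(32.0)
    Some(value * 2.0_f64.powi(exponent))
}
```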
`split_string()` is short and simple enough to write yourself if you need it,
and it encourages bad behavior by a) always returning owned strings, b) always
allocating them in a vector. Where possible, it is better to a) use `&wstr`,
b) use an iterator.
In Rust, it's an anti-pattern to unnecessarily abstract over allocating
operations. Some of the call sites even called `split_string(..).into_iter()`.
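A sketch of the contrast, using `&str` as a stand-in for `&wstr` (signatures
assumed, not fish's actual code):

```rust
// The removed helper's shape: forces a Vec plus one owned String per
// field, even when the caller only wants to iterate.
fn split_string(s: &str, sep: char) -> Vec<String> {
    s.split(sep).map(str::to_owned).collect()
}

// Preferable: borrow the input and iterate lazily; callers that really
// need owned values can still .map(ToOwned::to_owned).collect().
fn fields(s: &str, sep: char) -> impl Iterator<Item = &str> {
    s.split(sep)
}
```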
As documented in #10474, there are issues with 64-bit floating-point rounding
on x86 targets without SSE2 extensions, where x87 floating-point math causes
imprecise results.
Document the shortcoming and provide some version of the test that passes
regardless of architecture.
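One hedged sketch of an architecture-tolerant check (not necessarily the
approach taken here): assert within a small tolerance instead of demanding the
bit-exact result that differs between x87 and SSE2 code paths:

```rust
// Illustrative helper, not fish's actual test code.
fn assert_close(actual: f64, expected: f64) {
    // Allow a few ULPs of slack so excess x87 precision doesn't fail
    // the test on SSE2-less x86.
    let tolerance = 4.0 * f64::EPSILON * expected.abs().max(1.0);
    assert!(
        (actual - expected).abs() <= tolerance,
        "expected {expected}, got {actual}"
    );
}
```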
If a key's codepoint is in the PUA1 range, it could be either from our own
named keys (like key::Space) or from a CSI u key that we haven't assigned a
name to yet; see
https://sw.kovidgoyal.net/kitty/keyboard-protocol/#functional-key-definitions.
(The latter can still be bound using \u1234 or the equivalent raw CSI u
sequence \e[4660u.)
It doesn't make sense to insert a PUA character into the commandline when the
user presses PrintScreen; ignore such keys silently.
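A self-contained sketch of that dispatch; the lookup table, names, and the
specific codepoint are illustrative assumptions, not fish's actual tables:

```rust
fn named_key(c: char) -> Option<&'static str> {
    match c {
        '\u{E000}' => Some("space"), // hypothetical codepoint for key::Space
        _ => None,
    }
}

// Returns the text to dispatch, if any: named keys proceed as usual,
// while unnamed CSI u keys (such as PrintScreen) yield nothing instead
// of putting a raw PUA character on the commandline.
fn handle_pua_key(c: char) -> Option<&'static str> {
    // U+E000..=U+F8FF is the BMP Private Use Area (PUA1).
    debug_assert!(('\u{E000}'..='\u{F8FF}').contains(&c), "expected a PUA1 codepoint");
    named_key(c)
}
```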
This partially reverts b77d1d0e2 (Stop crashing on invalid Unicode input,
2024-02-27). That commit:
1. converted input byte sequences that map to a PUA codepoint into several
characters, using our one-char-per-byte PUA encoding,
2. did the same for inputs that are codepoints outside the valid Unicode range,
3. rendered them as replacement characters (one per input byte).
In future, we should probably remove these features altogether, and simply
ignore invalid Unicode code points.
More work in preparation for having `wopen_cloexec()` return `File` directly.
This eliminates checking for an invalid fd and makes both ownership and
mutability clear (some more operations that involve changes to the underlying
state of the fd now require `&mut File` instead of just a `RawFd`).
Code that clearly does not use non-blocking IO is ported to use
`Write::write_all()` directly instead of our rusty port of the `write_loop()`
function (which handles EAGAIN/EWOULDBLOCK in addition to EINTR, while
`write_all()` only handles the latter).
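A sketch of the assumed `write_loop()` semantics for contrast (simplified, not
fish's exact code):

```rust
use std::io::{self, Write};

// Like write_all(), retry on EINTR, but additionally keep trying on
// EAGAIN/EWOULDBLOCK so writes to non-blocking fds eventually drain.
fn write_loop<W: Write>(w: &mut W, mut buf: &[u8]) -> io::Result<()> {
    while !buf.is_empty() {
        match w.write(buf) {
            Ok(0) => return Err(io::ErrorKind::WriteZero.into()),
            Ok(n) => buf = &buf[n..],
            Err(e) if e.kind() == io::ErrorKind::Interrupted => continue, // EINTR
            Err(e) if e.kind() == io::ErrorKind::WouldBlock => continue,  // EAGAIN/EWOULDBLOCK
            Err(e) => return Err(e),
        }
    }
    Ok(())
}
```

`Write::write_all()` retries only on `ErrorKind::Interrupted`, which is why it
is safe to use it directly only where the fd is known to be blocking.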
* Fix build on NetBSD
Notably:
1. A typo in `f_flag` vs `f_flags` - this was probably never tested.
2. Some pointless name differences: `st_mtimensec` vs `st_mtime_nsec`.
3. The big one: the code claimed that LC_GLOBAL_LOCALE() was -1 "everywhere".
Well, not on NetBSD.
* Add an ifdef for macOS
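An illustrative `cfg` sketch built around the field names from the commit (the
non-NetBSD arm shows the Linux spelling and is an assumption; real code needs
further cases for e.g. macOS):

```rust
// Pick the right struct field per platform instead of assuming one
// spelling everywhere.
#[cfg(target_os = "netbsd")]
fn mtime_nsec(st: &libc::stat) -> i64 {
    st.st_mtimensec as i64
}

#[cfg(not(target_os = "netbsd"))]
fn mtime_nsec(st: &libc::stat) -> i64 {
    st.st_mtime_nsec as i64
}
```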