ThreadId is way slower than it needs to be for the way we use it; it
doesn't cache the id and allocates an Arc internally.
We don't care about the thread id used in crate::threads correlating with any
other thread id the code uses anywhere (not that it does) because it's only used
for our own bookkeeping. Change to something much simpler instead.
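A minimal sketch of the kind of bookkeeping-only id this means (illustrative
names, not fish's actual implementation): an atomic counter plus a per-thread
cache, with no Arc and no hashing involved.

    use std::sync::atomic::{AtomicU64, Ordering};

    // Monotonically increasing counter; the id only needs to be unique for our
    // own bookkeeping, not to correlate with std::thread::ThreadId.
    static NEXT_THREAD_ID: AtomicU64 = AtomicU64::new(0);

    thread_local! {
        // Computed once per thread and cached thereafter.
        static THREAD_ID: u64 = NEXT_THREAD_ID.fetch_add(1, Ordering::Relaxed);
    }

    pub fn thread_id() -> u64 {
        THREAD_ID.with(|id| *id)
    }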
Verified that std::sync::OnceLock<T> compiles to the same assembly at the
*access* site as the Option<T> we were using. The additional overhead upon init
is fine. No need for extra Box<T> indirection for IO_THREAD_POOL.
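Roughly the shape this describes, with a placeholder pool type:

    use std::sync::OnceLock;

    struct ThreadPool {
        max_threads: usize, // placeholder field
    }

    // Stored inline in the static; no Box indirection, and the access path
    // (get/get_or_init) is as cheap as checking the old Option<T>.
    static IO_THREAD_POOL: OnceLock<ThreadPool> = OnceLock::new();

    fn io_thread_pool() -> &'static ThreadPool {
        IO_THREAD_POOL.get_or_init(|| ThreadPool { max_threads: 8 })
    }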
While acquiring an uncontended mutex from the same thread (without reentrancy)
is essentially free, using `MainThread<RefCell<T>>` instead of `Mutex<T>` makes
it clear that no actual synchronization is taking place, hopefully making the
code easier to understand.
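An illustrative sketch of the pattern (fish's real MainThread<T> differs in
detail); the point is that the only thing guarding the data is the main-thread
assertion, not a lock:

    use std::cell::RefCell;

    struct MainThread<T>(T);
    // Sound only because get() asserts we are on the main thread.
    unsafe impl<T> Sync for MainThread<T> {}

    impl<T> MainThread<T> {
        const fn new(value: T) -> Self {
            Self(value)
        }
        fn get(&self) -> &T {
            assert_is_main_thread(); // stand-in for fish's actual check
            &self.0
        }
    }

    fn assert_is_main_thread() { /* e.g. compare against a thread id recorded at startup */ }

    // Instead of Mutex<Vec<String>>, make it obvious there is no cross-thread
    // synchronization: only the main thread may touch this.
    static PENDING: MainThread<RefCell<Vec<String>>> = MainThread::new(RefCell::new(Vec::new()));

    fn push_pending(item: String) {
        PENDING.get().borrow_mut().push(item);
    }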
The C++ version of this code simply copied the entire uvar table.
Today we take a reference. It's not clear which one is better.
Removal of locale variables like LC_ALL triggers variable change handlers,
which call EnvStackImpl::get. This deadlocks because we still hold the lock
that protects the reference to all uvars. Work around this.
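A sketch of the workaround pattern with assumed names (not the actual uvar
code): copy the affected keys out and drop the lock before firing handlers that
may re-enter the environment.

    use std::collections::BTreeMap;
    use std::sync::Mutex;

    static UVARS: Mutex<BTreeMap<String, String>> = Mutex::new(BTreeMap::new());

    fn is_locale_var(name: &str) -> bool {
        name == "LANG" || name.starts_with("LC_")
    }

    fn fire_change_handler(name: &str) {
        // A handler may call back into the environment (EnvStackImpl::get in
        // fish), which takes the same lock; it must run only after the lock
        // below has been released.
        let _ = UVARS.lock().unwrap().get(name).cloned();
    }

    fn remove_locale_vars() {
        // Collect and remove inside the critical section, then release the lock.
        let removed: Vec<String> = {
            let mut uvars = UVARS.lock().unwrap();
            let keys: Vec<String> = uvars
                .keys()
                .filter(|k| is_locale_var(k.as_str()))
                .cloned()
                .collect();
            for key in &keys {
                uvars.remove(key);
            }
            keys
        }; // lock dropped here
        for key in &removed {
            fire_change_handler(key); // safe: no lock is held any more
        }
    }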
Closes #10513
It is short and simple enough to write yourself if you need it, and it
encourages bad behavior by a) always returning owned strings and b) always
allocating them in a vector. If/where possible, it is better to a) use &wstr
and b) use an iterator. In Rust, it is an anti-pattern to unnecessarily
abstract over allocating operations. Some of the call sites even called
split_string(..).into_iter().
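For illustration with plain &str rather than wstr, the preferred shape looks
like this:

    // No allocation: split() yields borrowed slices lazily.
    fn count_path_entries(path_var: &str) -> usize {
        path_var.split(':').filter(|entry| !entry.is_empty()).count()
    }

    // The removed anti-pattern looked roughly like:
    //     for entry in split_string(path_var, ':').into_iter() { ... }
    // which allocated a whole Vec of owned strings just to iterate it once.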
This updates is_windows_subsystem_for_linux() to take a WSL version to test for
(any, v1, or v2) and return a boolean result depending on the system. I've
benchmarked it: when running on regular Linux, this is still just as fast as the
previous binary check; only under WSL does it take about 20ns longer to figure
out which variant we are running under.
Note that older WSLv2 kernels had a `-microsoft-standard` suffix while newer
ones appear to have a `-microsoft-standard-WSL2` suffix, so we make sure to test
for the least common denominator. (It doesn't matter to us, but note that newer
WSLv2 kernels have four dots in the version string!)
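An illustrative version of the suffix test (not the exact fish implementation):

    #[derive(Clone, Copy, PartialEq)]
    enum Wsl {
        Any,
        V1,
        V2,
    }

    fn is_wsl(kernel_release: &str, version: Wsl) -> bool {
        // "microsoft-standard" is the least common denominator for WSLv2
        // kernels; newer ones merely append "-WSL2" to that suffix.
        let v2 = kernel_release.contains("microsoft-standard");
        // WSLv1 kernels identify themselves differently (illustrative check).
        let v1 = !v2 && kernel_release.to_ascii_lowercase().contains("microsoft");
        match version {
            Wsl::Any => v1 || v2,
            Wsl::V1 => v1,
            Wsl::V2 => v2,
        }
    }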
WSL workarounds pertaining to the default Windows terminal or executable
behavior of win32 binaries under a WSL shell are extended to WSLv2 while those
specific to oddities in kernel behavior are confined to WSLv1 only. (It
technically wouldn't hurt to extend them to WSLv2 but there's no good reason to
do so, either.)
A common complaint has been the massive number of directories Windows appends
to $PATH slowing down fish when it attempts to find a non-existent binary (which
it does a lot more often than someone not in the know might think). The typical
workaround suggested is to trim unneeded entries from $PATH, but this a)
involves considerable friction and b) breaks resolution of Windows binaries
(you can no longer use `clip.exe`, `cmd.exe`, etc.).
This patch introduces a two-PATH workaround. If the cmd we are executing does
not contain a period (i.e. has no extension) it by definition cannot be a
Windows executable. In this case, we skip searching for it in any of the
auto-mounted, auto-PATH-appended directories like `/mnt/c/Windows/` or
`/mnt/c/Program Files`, but we *do* include those directories if what we're
searching for could be a Windows executable. (For now, instead of hard-coding a
list of known Windows executable extensions like .bat, .cmd, .exe, etc, we just
depend on the presence of an extension at all).
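The heuristic, sketched with illustrative names and "/mnt/" standing in for the
auto-mounted directories:

    // A command name with no '.' cannot be a Windows executable, so skip the
    // auto-mounted Windows directories when searching for it.
    fn dirs_to_search<'a>(cmd: &str, path: &'a [String]) -> impl Iterator<Item = &'a String> + 'a {
        let may_be_windows_exe = cmd.contains('.');
        path.iter()
            .filter(move |dir| may_be_windows_exe || !dir.starts_with("/mnt/"))
    }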
E.g. this is what starting up fish prints with debug logging enabled (the
logging has since been removed):
bypassing 100 dirs for lookup of kill
bypassing 100 dirs for lookup of zoxide
bypassing 100 dirs for lookup of zoxide
bypassing 100 dirs for lookup of fd
not bypassing dirs for lookup of open.exe
not bypassing dirs for lookup of git.exe
This has resulted in a massive speedup of common fish functions, especially
anywhere we internally use or perform the equivalent of `if command -q foo`.
Note that the `is_windows_subsystem_for_linux()` check will need to be patched to
extend this workaround to WSLv2, but I'll do that separately.
Under WSL:
* Benchmark `external_cmds` improves by 10%
* Benchmark `load_completions` improves by an incredible 77%
On this binding we fail to disable CSI u
bind c-t '
begin
set -lx FZF_DEFAULT_OPTS --height 40% --bind=ctrl-z:ignore
eval fzf | while read -l r; echo read $r; end
end
'
because for "fzf", ParseExecutionContext::setup_group() returns early with the
parent process group (which should be fish's own) , hence "wants_terminal"
is false. This seems questionable, I don't think the eval should make a
difference here.
For now, don't touch it; instead, use a more accurate way of detecting whether
a process may read keyboard input. In many such cases "wants_terminal" is false,
as in
echo (echo 1\n2\n3 | fzf)
Fixes #10504
This hot function dominates the flamegraphs for the completions thread, and any
optimizations are worthwhile.
A variety of different approaches were tested and benchmarked against real-world
fish-history file inputs and this is the one that won out across all rustc
target-cpu variations tried.
Benchmarks and code at https://github.com/mqudsi/fish-yaml-unescape-benchmark
We don't forward this variable for storage in any structs, so there's no reason
to go through an Arc instead of returning the `&'static EnvStack` directly.
NB: This particular change was safe, and passes all tests on its own.
These have clearer sync/unsync semantics and now ship with Rust itself. They
don't paper over any possible cross-thread issues, and we can specifically
choose whichever suits each purpose.
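For illustration, with placeholder statics:

    use std::cell::OnceCell; // unsync: no atomics, for single-thread data
    use std::sync::OnceLock; // sync: safe to initialize and read across threads

    thread_local! {
        // Per-thread value: the unsync OnceCell is all that's needed.
        static THREAD_SCRATCH: OnceCell<String> = OnceCell::new();
    }

    // Process-wide value read from multiple threads: the sync OnceLock.
    static PROGRAM_NAME: OnceLock<String> = OnceLock::new();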
`Parser` is a single-threaded `!Send`, `!Sync` type and does not need to use
`Arc` for anything. We were using it because that's all we had for the parser's
`EnvStack`, but though that is *technically* protected internally by a mutex
(shared with the global EnvStack), there's nothing to say that other parsers
with a narrower scope/lifetime on other threads will necessarily be using the
same backing mutex.
We can safely marshal the existing `Arc<EnvStack>` we get from
`environment::principal()` into an `Rc<EnvStack>` since the underlying reference
is always valid. To prove this point, we could have PRINCIPAL_STACK be a static
`EnvStack` and have `environment::principal()` use `Arc::from_raw()` to turn
that into an `Arc<EnvStack>`, but there's no need to formalize the process.
By inverting the order of storage, we can use an `OnceCell`/`unsync::Lazy`
inside the Send/Sync `MainThread<T>` and remove the need for a lock altogether.
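Sketched with the same illustrative MainThread wrapper as above and a
placeholder payload:

    use std::cell::OnceCell;

    struct MainThread<T>(T); // illustrative; Sync only by main-thread assertion
    unsafe impl<T> Sync for MainThread<T> {}
    impl<T> MainThread<T> {
        const fn new(value: T) -> Self {
            Self(value)
        }
        fn get(&self) -> &T {
            // assert we are on the main thread here
            &self.0
        }
    }

    // Inverted nesting: the unsync OnceCell sits *inside* the Send/Sync
    // wrapper, so lazy initialization no longer needs a Mutex at all.
    static PRINCIPAL: MainThread<OnceCell<String>> = MainThread::new(OnceCell::new());

    fn principal() -> &'static String {
        PRINCIPAL.get().get_or_init(|| String::from("created on first use"))
    }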
This is reasonable since we are only checking that the history file contains
the expected format; if it is corrupted but we still got what look like the
correct key/value pairs, then that's all we can do.
Of course the real motivation is to speed up this very hot function in any way
possible!
On the completions and history thread, the parent function
HistoryFileContents::decode_item() is responsible for ~60% of the CPU time, with
extract_prefix_and_unescape_yaml() alone comprising 14% of the total.
This change removes allocations in the event that the history item is either
fully or partially plain yaml with no escapes to begin with, and brings this
function down to only 7% of the total execution time.
The bulk of the remaining time is spent in wcs2string(), which is called
unconditionally and is naturally alloc-heavy.
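A sketch of the no-allocation fast path using Cow (simplified, illustrative
escape handling; the real function differs):

    use std::borrow::Cow;

    fn unescape_yaml(s: &[u8]) -> Cow<'_, [u8]> {
        // Fully plain: return the input unchanged, zero allocations.
        let Some(first_escape) = s.iter().position(|&b| b == b'\\') else {
            return Cow::Borrowed(s);
        };
        // Partially plain: copy the unescaped prefix once, then rewrite.
        let mut out = Vec::with_capacity(s.len());
        out.extend_from_slice(&s[..first_escape]);
        let mut iter = s[first_escape..].iter().copied();
        while let Some(b) = iter.next() {
            if b == b'\\' {
                match iter.next() {
                    Some(b'\\') => out.push(b'\\'),
                    Some(b'n') => out.push(b'\n'),
                    other => {
                        out.push(b'\\');
                        out.extend(other);
                    }
                }
            } else {
                out.push(b);
            }
        }
        Cow::Owned(out)
    }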
This allows running `set` without triggering any event handlers.
That is useful if you want to set a variable inside an event handler for that
very variable - for example in the fish_user_path or fish_key_bindings handlers.
This is something the `block` builtin was supposed to be for, but it never
really worked because it only suppresses the events for the duration of the
block; they would still fire afterwards. See #9030.
Because it is possible to abuse this, we only offer a long option, so that
people can see what is going on.
Commit a583fe723 ("commandline -f foo" to skip queue and execute immediately,
2024-04-08) fixed the execution order of some bindings but was partially
backed out in 5ba21cd29 (Send repaint requests through the input queue again,
2024-04-19) because repainting outside toplevel yields surprising results
(wrong $status etc).
Transient prompts want to first repaint and then execute some more readline
commands, all within a single binding. This was broken by the second commit
because that one defers the repaint until after the binding has finished.
Work around this problem by deferring input events again while a readline
event is queued. This is closest to the historical behavior.
The implementation feels hacky; we might find odd situations.
For example,
commandline -f repaint end-of-line
set token (commandline -t)
sets the wrong token.
Probably not a very important case. We could throw an error or make it work
by letting "commandline -t" drain the input queue.
That seems too complicated; better to change repaints to not use the input
queue (and to fake $status etc.). Let's try to do that in the future.
Closes #10492
[w]open_dir does not pass O_CREAT, so the mode argument to open is never used.
Also, O_CREAT | O_DIRECTORY cannot be used (portably) to create a directory.
(POSIX does not specify what should happen; on Linux it fails with EINVAL.)
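A minimal sketch of an open_dir-style helper using the libc crate
(illustrative, not the fish implementation), with no mode argument at all:

    use std::ffi::CString;
    use std::os::fd::{FromRawFd, OwnedFd};

    fn open_dir(path: &str) -> std::io::Result<OwnedFd> {
        let c_path = CString::new(path)
            .map_err(|_| std::io::Error::from_raw_os_error(libc::EINVAL))?;
        // O_CREAT is never passed, so open()'s optional third argument is omitted.
        let fd = unsafe {
            libc::open(c_path.as_ptr(), libc::O_RDONLY | libc::O_DIRECTORY | libc::O_CLOEXEC)
        };
        if fd < 0 {
            Err(std::io::Error::last_os_error())
        } else {
            Ok(unsafe { OwnedFd::from_raw_fd(fd) })
        }
    }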
rustc 1.80 now complains about features not declared in Cargo.toml and cfg
keys/values not declared by build.rs to protect against typos or misuse (you
think you're using the right condition but you're not). See
rust-lang/cargo#10554 and rust-lang/rust#82450.
(We're not actually using TSAN under CI at this time, but I do want to re-enable
it at some point — especially if we get multithreaded execution going — using
the rust-native TSAN configuration.)
I'll be updating the `rsconf` crate and patching `build.rs` accordingly to also
handle the warnings about unknown cfg values, but tsan is a feature, not a cfg,
and those warnings can be dealt with in `Cargo.toml` directly.
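For illustration, a hypothetical build.rs declaring a made-up cfg key via the
stabilized check-cfg directive, with the tsan feature left to Cargo.toml:

    // build.rs (sketch)
    fn main() {
        // Tell rustc 1.80's check-cfg lint about a custom cfg key (hypothetical
        // name; fish's real cfg keys are emitted via rsconf).
        println!("cargo::rustc-check-cfg=cfg(use_native_tsan)");
        // The tsan switch itself is a feature, so it is declared in Cargo.toml
        // ([features] tsan = []) rather than here.
    }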
We were passing a slice (and not a vec) to `CString::new()`, meaning it would
allocate a new Vec internally to hold the bytes.
Also document that the resulting CString will be silently truncated at the first
interior NUL.
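An illustrative sketch of passing an owned Vec (so its buffer can be reused)
together with the documented truncation behavior:

    use std::ffi::CString;

    fn to_c_string(bytes: Vec<u8>) -> CString {
        match CString::new(bytes) {
            // The Vec's buffer is consumed directly; no extra copy is made.
            Ok(s) => s,
            // Sketch of the documented behavior: silently truncate at the
            // first interior NUL rather than failing.
            Err(e) => {
                let nul = e.nul_position();
                let mut v = e.into_vec();
                v.truncate(nul);
                CString::new(v).unwrap()
            }
        }
    }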
The function was repeatedly calling `s.char_at(n)`, which is O(1) only for
UTF-32 strings (so not a problem at the moment). But it was also calling
`hex_digit(n)` twice for each `n` in the 3-digit case, causing unnecessary
repeated parsing of individual characters into their radix-16 numeric
equivalents, which could be avoided simply by reusing the already-calculated
result.
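A sketch of the fix with illustrative names: each character is parsed once and
the result reused, instead of calling the digit parser twice for the same index.

    fn parse_hex_run(chars: &[char], start: usize, max_digits: usize) -> Option<u32> {
        let mut value = 0u32;
        let mut consumed = 0;
        for c in chars.iter().skip(start).take(max_digits) {
            match c.to_digit(16) {
                // Parsed once; the old code effectively recomputed this.
                Some(digit) => {
                    value = value * 16 + digit;
                    consumed += 1;
                }
                None => break,
            }
        }
        if consumed == 0 {
            None
        } else {
            Some(value)
        }
    }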