select_wrapper_t wraps up the annoying bits of using select(): keeping
track of the max fd, passing null for boring parameters, and
constructing the timeout. Introduce a wrapper struct for this and
replace the existing uses of select() with the wrapper.
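A minimal sketch of what such a wrapper can look like (fish's actual interface may differ; the member names here are illustrative):

    #include <sys/select.h>

    #include <algorithm>
    #include <cstdint>

    // Illustrative wrapper: track the max fd as fds are added, pass null for
    // the unused write/error sets, and build the timeval from a microsecond count.
    struct select_wrapper_t {
        fd_set fds_;
        int nfds_ = 0;

        select_wrapper_t() { FD_ZERO(&fds_); }

        // Add an fd to the read set, keeping the max-fd bookkeeping up to date.
        void add(int fd) {
            FD_SET(fd, &fds_);
            nfds_ = std::max(nfds_, fd + 1);
        }

        // Wait for readability, with the timeout given in microseconds.
        int select(uint64_t timeout_usec) {
            struct timeval tv;
            tv.tv_sec = static_cast<time_t>(timeout_usec / 1000000);
            tv.tv_usec = static_cast<suseconds_t>(timeout_usec % 1000000);
            return ::select(nfds_, &fds_, nullptr, nullptr, &tv);
        }
    };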
Previously iothread_perform could run a function on a background thread, and
then invoke a completion on the main thread. But we no longer use that second
part: everything now goes through debounce. Remove the completion
parameter from iothread_perform.
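As a sketch, the change amounts to dropping the second parameter (the exact fish signatures may differ):

    #include <functional>

    // Before (sketch): background work plus a completion invoked on the main thread.
    //   void iothread_perform(std::function<void()> func, std::function<void()> completion);
    // After (sketch): only the background part remains.
    void iothread_perform(std::function<void()> func);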
Previously fish attempted to block all signals on background threads, so
that they would be delivered to the main thread. But on macOS, SIGSEGV
and probably some others just get silently dropped, leading to potential
infinite loops instead of a crash. So stop blocking these signals.
With this change the null-deref in #7837 will properly crash instead of
spinning.
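A sketch of the approach, assuming a standard pthread_sigmask setup (the function name and exact signal list are illustrative, not fish's code):

    #include <pthread.h>
    #include <signal.h>

    // Block most signals on a background thread, but leave the synchronous
    // fatal ones deliverable so a bug like a null-deref crashes loudly
    // instead of being silently dropped.
    static void set_background_thread_signal_mask() {
        sigset_t set;
        sigfillset(&set);
        sigdelset(&set, SIGSEGV);  // keep fatal, thread-directed signals unblocked
        sigdelset(&set, SIGBUS);
        sigdelset(&set, SIGFPE);
        sigdelset(&set, SIGILL);
        sigdelset(&set, SIGABRT);
        (void)pthread_sigmask(SIG_SETMASK, &set, nullptr);
    }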
The iothread pool has a feature where, if the queue is emptied, some
threads will choose to wait around in case new work appears, up to a
certain amount of time (500 msec). This prevents thrashing where new
threads are rapidly created and destroyed as the user types. This is
implemented via `std::condition_variable::wait_for`. However this function
is not properly instrumented under Thread Sanitizer (see
https://github.com/google/sanitizers/issues/1259) so TSan reports false
positives. Just disable this feature under TSan.
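A sketch of how such a guard can look, using clang's __has_feature (the names here are illustrative):

    #include <chrono>
    #include <condition_variable>
    #include <mutex>

    #ifndef __has_feature
    #define __has_feature(x) 0  // compatibility with compilers lacking __has_feature
    #endif

    // Linger up to 500 msec for new work, except under TSan, where wait_for
    // is uninstrumented and would produce false positives.
    bool wait_for_work(std::condition_variable &cv, std::unique_lock<std::mutex> &lock,
                       const bool &has_work) {
    #if __has_feature(thread_sanitizer)
        return has_work;  // don't linger under TSan; exit if the queue is empty
    #else
        return cv.wait_for(lock, std::chrono::milliseconds(500),
                           [&] { return has_work; });
    #endif
    }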
std::queue uses std::deque under the hood, which is more expensive than a vector.
We always consume the entire queue, so there is no advantage to using a deque here.
Just use a vector.
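A sketch of the resulting pattern, with illustrative names: drain by swapping the whole vector out under the lock.

    #include <functional>
    #include <mutex>
    #include <vector>

    struct work_list_t {
        std::mutex lock;
        std::vector<std::function<void()>> pending;

        // The consumer always drains everything, so swap the vector out whole
        // under the lock rather than popping items one at a time.
        void drain() {
            std::vector<std::function<void()>> todo;
            {
                std::unique_lock<std::mutex> locker(lock);
                todo.swap(pending);
            }
            for (auto &func : todo) func();
        }
    };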
Replace the complicated implementation, which shared a condition variable, with
one that just uses std::future<void>. This may allocate more condition
variables but is much simpler.
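A sketch of the per-request scheme, with illustrative names:

    #include <functional>
    #include <future>
    #include <thread>

    // Each request owns a promise/future pair, so the waiter blocks on its own
    // future rather than on a condition variable shared with other requests.
    void run_and_wait(std::function<void()> func) {
        std::promise<void> done;
        std::future<void> waiter = done.get_future();
        std::thread thread([func = std::move(func), done = std::move(done)]() mutable {
            func();            // the work itself
            done.set_value();  // wakes exactly this waiter
        });
        waiter.wait();
        thread.join();
    }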
This concerns how fish prevents its own fds from interfering with
user-defined fd redirections, like `echo hi >&5`. fish has historically
done this by tracking all user-defined redirections when running a job,
and ensuring that pipes are not assigned the same fds. However this is
annoying to pass around - it means that we have to thread user-defined
redirections into pipe creation.
Take a page from zsh and just ensure that all pipes we create have fds in
the "high range," which here means at least 10. The primary way to do this
is via fcntl's F_DUPFD_CLOEXEC operation, which also sets CLOEXEC, so we aren't
invoking additional syscalls in the common case. This will free us from
having to track which fds are in user-defined redirections.
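A sketch of the mechanism (the function name is hypothetical):

    #include <fcntl.h>
    #include <unistd.h>

    // Move a pipe fd into the high range (>= 10) and set CLOEXEC in a single
    // fcntl call, so it cannot collide with user redirections like >&5. Real
    // code would want a fallback for platforms lacking F_DUPFD_CLOEXEC.
    static int make_fd_high(int fd) {
        int high = fcntl(fd, F_DUPFD_CLOEXEC, 10);  // lowest free fd >= 10, CLOEXEC set
        if (high < 0) return fd;                    // on failure, keep the original fd
        close(fd);
        return high;
    }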
It may happen that the user types an abbreviation and then hits return.
Prior to this commit, we would perform a form of syntax highlighting
that does not require I/O, so as not to block the user. However this
could cause invalid commands to be colored as valid.
More generally, if the user has e.g. a slow NFS mount, then syntax
highlighting may lag behind the user's typing, and be incorrect at the
time the user hits return. This is an unavoidable race, since proper
syntax highlighting may take arbitrarily long.
Introduce a new function `finish_highlighting_before_exec`, which waits
for any outstanding syntax highlighting to complete, BUT has a timeout
(250 milliseconds). After this, it falls back to the no-I/O variant, which
colors all commands as valid and nothing as paths.
Fixes #7418. Fixes #5912.
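A sketch of the timeout idea behind finish_highlighting_before_exec, with the highlight types simplified to keep it self-contained:

    #include <chrono>
    #include <future>
    #include <string>
    #include <vector>

    // Stand-in for fish's highlight data.
    using color_list_t = std::vector<int>;

    color_list_t no_io_highlight(const std::wstring &text) {
        // Hypothetical fallback: color everything as a valid command, nothing as a path.
        return color_list_t(text.size(), 0);
    }

    // Wait up to 250 msec for the real highlight; on timeout, fall back to the
    // no-I/O variant.
    color_list_t finish_highlighting_before_exec(std::future<color_list_t> &pending,
                                                 const std::wstring &text) {
        auto status = pending.wait_for(std::chrono::milliseconds(250));
        if (status == std::future_status::ready) return pending.get();
        return no_io_highlight(text);
    }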
debounce_t will be used to limit thread creation from background highlighting
and autosuggestion scenarios. This is a one-element queue backed by a
single thread. New requests displace any existing queued request; this
reflects the fact that autosuggestions and highlighting only care about
the most recent result.
A timeout allows for abandoning hung threads, which may happen if you
attempt to e.g. access a dead hard-mounted NFS server. We don't want
this to defeat autosuggestions and highlighting permanently, so allow
spawning a new thread after the timeout (here 500 ms).
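A much-simplified sketch of the one-element-queue behavior; it elides the 500 ms hung-thread timeout, uses illustrative names, and assumes the debouncer outlives its worker:

    #include <functional>
    #include <mutex>
    #include <thread>

    class debounce_sketch_t {
        std::mutex lock_;
        std::function<void()> pending_;  // at most one queued request
        bool worker_running_ = false;

       public:
        void perform(std::function<void()> func) {
            std::unique_lock<std::mutex> locker(lock_);
            pending_ = std::move(func);   // displace any request that hasn't started
            if (worker_running_) return;  // the running worker will pick it up
            worker_running_ = true;
            std::thread([this] {
                for (;;) {
                    std::function<void()> work;
                    {
                        std::unique_lock<std::mutex> locker(lock_);
                        if (!pending_) {
                            worker_running_ = false;
                            return;
                        }
                        work = std::move(pending_);
                        pending_ = nullptr;
                    }
                    work();
                }
            }).detach();
        }
    };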
Sometimes we must spawn a new thread, to avoid the risk of deadlock.
Ensure we always spawn a thread in those cases. In particular this
includes the fillthread.
64 is too low (it's actually reachable), and every sensible system should have a limit above
this.
On OpenBSD and FreeBSD it's ULONG_MAX; on my Linux system it's 61990.
Plus we currently fail by hanging if our limit is reached, so this
should improve things regardless.
On my Linux system _POSIX_THREAD_THREADS_MAX works out to 64, which is
just too low, even though the system can handle more.
Fixes#6503 harder.
This reintroduces commits 22230a1a0d
and 9d7d70c204, now with the bug fixed.
The problem occurred when there was one thread waiting in the pool. We enqueue
an item onto the pool and attempt to wake up the thread. But before the
thread runs, we enqueue another item - this second enqueue will see the
thread waiting and attempt to wake it up as well. If the two work items
were dependent (reader/writer) then we would have a deadlock.
The fix is to compare the number of waiting threads against the queue length:
if the number of enqueued items exceeds the number of waiting threads, always
spawn a new thread.
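A sketch of the corrected wake-or-spawn decision, with illustrative names:

    #include <condition_variable>
    #include <cstddef>
    #include <mutex>

    // Wake a waiter only if there are at least as many waiters as queued
    // items; otherwise report that a new thread must be spawned, so two
    // dependent items can never end up sharing one waiter.
    struct pool_counts_t {
        std::mutex lock;
        std::condition_variable wakeup;
        size_t queued = 0;   // items waiting to run
        size_t waiting = 0;  // threads blocked on the condition variable
    };

    // Returns true if the caller must spawn a thread for the newly enqueued item.
    bool enqueue_needs_spawn(pool_counts_t &pool) {
        std::unique_lock<std::mutex> locker(pool.lock);
        pool.queued += 1;
        if (pool.waiting >= pool.queued) {
            pool.wakeup.notify_one();  // an idle thread will handle it
            return false;
        }
        return true;  // more items than waiters: spawn to avoid deadlock
    }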
Improve the iothread behavior by enabling an iothread to stick around for
a while waiting for work. This reduces the amount of iothread churn, which
is useful on platforms where threads are expensive.
Also do other modernization, like cleaning up the locking discipline and
using FLOG.
This runs build_tools/style.fish, which runs clang-format on C++, fish_indent on fish, and (new) black on Python.
If anything is wrong with the formatting, we should fix the tools, but automated formatting is worth it.